[Federal Register Volume 81, Number 214 (Friday, November 4, 2016)]
[Rules and Regulations]
[Pages 77008-77831]
From the Federal Register Online [www.thefederalregister.org]
[FR Doc No: 2016-25240]
[[Page 77007]]
Vol. 81
Friday,
No. 214
November 4, 2016
Part II
Department of Health and Human Services
-----------------------------------------------------------------------
Centers for Medicare & Medicaid Services
-----------------------------------------------------------------------
42 CFR Parts 414 and 495
Medicare Program; Merit-Based Incentive Payment System (MIPS) and
Alternative Payment Model (APM) Incentive Under the Physician Fee
Schedule, and Criteria for Physician-Focused Payment Models; Final Rule
[[Page 77008]]
-----------------------------------------------------------------------
DEPARTMENT OF HEALTH AND HUMAN SERVICES
Centers for Medicare & Medicaid Services
42 CFR Parts 414 and 495
[CMS-5517-FC]
RIN 0938-AS69
Medicare Program; Merit-Based Incentive Payment System (MIPS) and
Alternative Payment Model (APM) Incentive Under the Physician Fee
Schedule, and Criteria for Physician-Focused Payment Models
AGENCY: Centers for Medicare & Medicaid Services (CMS), HHS.
ACTION: Final rule with comment period.
-----------------------------------------------------------------------
SUMMARY: The Medicare Access and CHIP Reauthorization Act of 2015
(MACRA) repeals the Medicare sustainable growth rate (SGR) methodology
for updates to the physician fee schedule (PFS) and replaces it with a
new approach to payment called the Quality Payment Program that rewards
the delivery of high-quality patient care through two avenues: Advanced
Alternative Payment Models (Advanced APMs) and the Merit-based
Incentive Payment System (MIPS) for eligible clinicians or groups under
the PFS. This final rule with comment period establishes incentives for
participation in certain alternative payment models (APMs) and includes
the criteria for use by the Physician-Focused Payment Model Technical
Advisory Committee (PTAC) in making comments and recommendations on
physician-focused payment models (PFPMs). Alternative Payment Models
are payment approaches, developed in partnership with the clinician
community, that provide added incentives to deliver high-quality and
cost-efficient care. APMs can apply to a specific clinical condition, a
care episode, or a population. This final rule with comment period also
establishes the MIPS, a new program for certain Medicare-enrolled
practitioners. MIPS will consolidate components of three existing
programs, the Physician Quality Reporting System (PQRS), the Physician
Value-based Payment Modifier (VM), and the Medicare Electronic Health
Record (EHR) Incentive Program for Eligible Professionals (EPs), and
will continue the focus on quality, cost, and use of certified EHR
technology (CEHRT) in a cohesive program that avoids redundancies. In
this final rule with comment period we have rebranded key terminology
based on feedback from stakeholders, with the goal of selecting terms
that will be more easily identified and understood by our stakeholders.
DATES: Effective date: The provisions of this final rule with comment
period are effective on January 1, 2017.
Comment date: To be assured consideration, comments must be
received at one of the addresses provided below, no later than 5 p.m.
on December 19, 2016.
ADDRESSES: In commenting, please refer to file code CMS-5517-FC.
Because of staff and resource limitations, we cannot accept comments by
facsimile (FAX) transmission. You may submit comments in one of four
ways (please choose only one of the ways listed):
1. Electronically. You may submit electronic comments on this
regulation to http://www.regulations.gov. Follow the ``Submit a
comment'' instructions.
2. By regular mail. You may mail written comments to the following
address ONLY: Centers for Medicare & Medicaid Services, Department of
Health and Human Services, Attention: CMS-5517-FC, P.O. Box 8013,
Baltimore, MD 21244-8013.
Please allow sufficient time for mailed comments to be received
before the close of the comment period.
3. By express or overnight mail. You may send written comments to
the following address ONLY: Centers for Medicare & Medicaid Services,
Department of Health and Human Services, Attention: CMS-5517-FC, Mail
Stop C4-26-05, 7500 Security Boulevard, Baltimore, MD 21244-1850.
4. By hand or courier. Alternatively, you may deliver (by hand or
courier) your written comments ONLY to the following addresses prior to
the close of the comment period:
a. For delivery in Washington, DC--Centers for Medicare & Medicaid
Services, Department of Health and Human Services, Room 445-G, Hubert
H. Humphrey Building, 200 Independence Avenue SW., Washington, DC
20201.
(Because access to the interior of the Hubert H. Humphrey Building
is not readily available to persons without Federal government
identification, commenters are encouraged to leave their comments in
the CMS drop slots located in the main lobby of the building. A stamp-
in clock is available for persons wishing to retain a proof of filing
by stamping in and retaining an extra copy of the comments being
filed.)
b. For delivery in Baltimore, MD--Centers for Medicare & Medicaid
Services, Department of Health and Human Services, 7500 Security
Boulevard, Baltimore, MD 21244-1850.
If you intend to deliver your comments to the Baltimore address,
call telephone number (410) 786-7195 in advance to schedule your
arrival with one of our staff members.
Comments erroneously mailed to the addresses indicated as
appropriate for hand or courier delivery may be delayed and received
after the comment period.
For information on viewing public comments, see the beginning of
the SUPPLEMENTARY INFORMATION section.
FOR FURTHER INFORMATION CONTACT: Molly MacHarris, (410) 786-4461, for
inquiries related to MIPS. James P. Sharp, (410) 786-7388, for
inquiries related to APMs.
SUPPLEMENTARY INFORMATION:
Table of Contents
I. Executive Summary
II. Provisions of the Proposed Regulations and Analysis of and
Responses to Comments
A. Establishing MIPS and the Advanced APM Incentive
B. Program Principles and Goals
C. Changes to Existing Programs
D. Definitions
E. MIPS Program Details
F. Overview of Incentives for Participation in Advanced
Alternative Payment Models
III. Collection of Information Requirements
IV. Regulatory Impact Analysis
A. Statement of Need
B. Overall Impact
C. Changes in Medicare Payments
D. Impact on Beneficiaries
E. Impact on Other Health Care Programs and Providers
F. Alternatives Considered
G. Assumptions and Limitations
H. Accounting Statement
Acronyms
Because of the many terms to which we refer by acronym in this rule, we
are listing the acronyms used and their corresponding meanings in
alphabetical order below:
ABC&#8482; Achievable Benchmark of Care
ACO Accountable Care Organization
APM Alternative Payment Model
APRN Advanced Practice Registered Nurse
ASPE HHS' Office of the Assistant Secretary for Planning and
Evaluation
BPCI Bundled Payments for Care Improvement
CAH Critical Access Hospital
CAHPS Consumer Assessment of Healthcare Providers and Systems
CBSA Core-Based Statistical Area
CDS Clinical Decision Support
CEHRT Certified EHR technology
CFR Code of Federal Regulations
CHIP Children's Health Insurance Program
CJR Comprehensive Care for Joint Replacement
CMMI Center for Medicare & Medicaid Innovation (CMS Innovation
Center)
COI Collection of Information
[[Page 77009]]
CPIA Clinical Practice Improvement Activity
CPOE Computerized Provider Order Entry
CPR Customary, Prevailing, and Reasonable
CPS Composite Performance Score
CPT Current Procedural Terminology
CQM Clinical Quality Measure
CY Calendar Year
eCQM Electronic Clinical Quality Measure
ED Emergency Department
EHR Electronic Health Record
EP Eligible Professional
ESRD End-Stage Renal Disease
FFS Fee-for-Service
FR Federal Register
FQHC Federally Qualified Health Center
GAO Government Accountability Office
HIE Health Information Exchange
HIPAA Health Insurance Portability and Accountability Act of 1996
HITECH Health Information Technology for Economic and Clinical
Health
HPSA Health Professional Shortage Area
HHS Department of Health & Human Services
HRSA Health Resources and Services Administration
IHS Indian Health Service
IT Information Technology
LDO Large Dialysis Organization
MACRA Medicare Access and CHIP Reauthorization Act of 2015
MEI Medicare Economic Index
MIPAA Medicare Improvements for Patients and Providers Act of 2008
MIPS Merit-based Incentive Payment System
MLR Minimum Loss Rate
MSPB Medicare Spending per Beneficiary
MSR Minimum Savings Rate
MUA Medically Underserved Area
NPI National Provider Identifier
OCM Oncology Care Model
ONC Office of the National Coordinator for Health Information
Technology
PECOS Medicare Provider Enrollment, Chain, and Ownership System
PFPMs Physician-Focused Payment Models
PFS Physician Fee Schedule
PHS Public Health Service
PQRS Physician Quality Reporting System
PTAC Physician-Focused Payment Model Technical Advisory Committee
QCDR Qualified Clinical Data Registry
QP Qualifying APM Participant
QRDA Quality Reporting Document Architecture
QRUR Quality and Resource Use Reports
RBRVS Resource-Based Relative Value Scale
RFI Request for Information
RHC Rural Health Clinic
RIA Regulatory Impact Analysis
RVU Relative Value Unit
SGR Sustainable Growth Rate
TCPI Transforming Clinical Practice Initiative
TIN Tax Identification Number
VM Value-Based Payment Modifier
VPS Volume Performance Standard
I. Executive Summary
1. Overview
The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA)
(Pub. L. 114-10, enacted April 16, 2015), amended title XVIII of the
Social Security Act (the Act) to repeal the Medicare sustainable growth
rate, to reauthorize the Children's Health Insurance Program, and to
strengthen Medicare access by improving physician and other clinician
payments and making other improvements. This rule finalizes policies to
improve physician and other clinician payments by changing the way
Medicare incorporates quality measurement into payments and by
developing new policies to address and incentivize participation in
Alternative Payment Models (APMs). These unified policies to promote
greater value within the healthcare system are referred to as the
Quality Payment Program.
The MACRA, landmark bipartisan legislation, advances a forward-
looking, coordinated framework for health care providers to
successfully take part in the CMS Quality Payment Program that rewards
value and outcomes in one of two ways:
Advanced Alternative Payment Models (Advanced APMs).
Merit-based Incentive Payment System (MIPS).
The MACRA marks a milestone in efforts to improve and reform the
health care system. Building off of the successful coverage expansions
and improvements to access under the Patient Protection and Affordable
Care Act (Affordable Care Act), the MACRA puts an increased focus on
the quality and value of care delivered. By implementing MACRA to
promote participation in certain APMs, such as the Shared Savings
Program, Medical Home Models, and innovative episode payment models for
cardiac and joint care, and by paying eligible clinicians for quality
and value under MIPS, we support the nation's progress toward achieving
a patient-centered health care system that delivers better care,
smarter spending, and healthier people and communities. By driving
significant changes in how care is delivered to make the health care
system more responsive to patients and families, we believe the Quality
Payment Program supports eligible clinicians in improving the health of
their patients, including encouraging interested eligible clinicians in
their successful transition into APMs. To implement this vision, we are
finalizing a program that emphasizes high-quality care and patient
outcomes while minimizing burden on eligible clinicians and that is
flexible, highly transparent, and improves over time with input from
clinical practices. To aid in this process, we have sought feedback
from the health care community through various public avenues and
solicited comment through the proposed rule. As we establish policies
for effective implementation of the MACRA, we do so with the explicit
understanding that technology, infrastructure, physician support
systems, and clinical practices will change over the next few years. In
addition, we are aware of the diversity of clinician practices in their
experience with quality-based payments. As a result of these factors,
we expect the Quality Payment Program to evolve over multiple years in
order to achieve our national goals. In the early years of the program,
we will begin by laying the groundwork for expansion towards an
innovative, outcome-focused, patient-centered, resource-effective
health system. Through a staged approach, we can develop policies that
are operationally feasible and made in consideration of system
capabilities and our core strategies to drive progress and reform
efforts. Thus, due to this staged approach, we are finalizing the rule
with a comment period. We commit to continue iterating on these
policies.
The Quality Payment Program aims to do the following: (1) Support
care improvement by focusing on better outcomes for patients, decreased
provider burden, and preservation of independent clinical practice; (2)
promote adoption of alternative payment models that align incentives
across healthcare stakeholders; and (3) advance existing efforts of
Delivery System Reform, including ensuring a smooth transition to a new
system that promotes high-quality, efficient care through unification
of CMS legacy programs.
This final rule with comment period establishes the Quality Payment
Program and its two interrelated pathways: Advanced APMs and the MIPS.
This final rule with comment period establishes incentives for
participation in Advanced APMs, supporting the Administration's goals
of transitioning from fee-for-service (FFS) payments to payments for
quality and value, including approaches that focus on better care,
smarter spending, and healthier people. This final rule with comment
period also includes definitions of Qualifying APM Participants (QPs)
in Advanced APMs and outlines the criteria for use by the Physician-
Focused Payment Model Technical Advisory Committee (PTAC) in making
comments and recommendations to the Secretary on physician-focused
payment models (PFPMs).
MIPS is a new program for certain Medicare-participating eligible
[[Page 77010]]
clinicians that will make payment adjustments based on performance on
quality, cost and other measures, and will consolidate components of
three existing programs--the Physician Quality Reporting System (PQRS),
the Physician Value-based Payment Modifier (VM), and the Medicare
Electronic Health Record (EHR) Incentive Program for eligible
professionals (EPs). As prescribed by Congress, MIPS will focus on:
Quality--both a set of evidence-based, specialty-specific standards as
well as practice-based improvement activities; cost; and use of
certified electronic health record (EHR) technology (CEHRT) to support
interoperability and advanced quality objectives in a single, cohesive
program that avoids redundancies. Many features of MIPS are intended to
be simplified and further integrated during the second and third years.
2. Quality Payment Program Strategic Objectives
We solicited and reviewed over 4,000 comments and had over 100,000
physicians and other stakeholders attend our outreach sessions. Through
this outreach, we created six strategic objectives to drive continued
progress and improvement.
These objectives guided our final policies and will guide our
future rulemaking in order to design, implement and evolve a Quality
Payment Program that aims to improve health outcomes, promote smarter
spending, minimize burden of participation, and provide fairness and
transparency in operations. These strategic objectives are as follows:
(1) To improve beneficiary outcomes and engage patients through
patient-centered Advanced APM and MIPS policies; (2) to enhance
clinician experience through flexible and transparent program design
and interactions with easy-to-use program tools; (3) to increase the
availability and adoption of robust Advanced APMs; (4) to promote
program understanding and maximize participation through customized
communication, education, outreach and support that meet the needs of
the diversity of physician practices and patients, especially the
unique needs of small practices; (5) to improve data and information
sharing to provide accurate, timely, and actionable feedback to
clinicians and other stakeholders; and (6) to ensure operational
excellence in program implementation and ongoing development. More
information on these objectives and the Quality Payment Program can be
found at QualityPaymentProgram.cms.gov.
With these objectives we recognize that the Quality Payment Program
provides new opportunities to improve care delivery by supporting and
rewarding clinicians as they find new ways to engage patients, families
and caregivers and to improve care coordination and population health
management. In addition, we recognize that by developing a program that
is flexible instead of one-size-fits-all, clinicians will be able to
choose to participate in a way that is best for them, their practice,
and their patients. For clinicians interested in APMs, we believe that
by setting ambitious yet achievable goals, eligible clinicians will
move with greater certainty toward these new approaches of delivering
care. To these ends, and to ensure this program works for all
stakeholders, we further recognize that we must provide ongoing
education, support, and technical assistance so that clinicians can
understand program requirements, use available tools to enhance their
practices, and improve quality and progress toward participation in
alternative payment models if that is the best choice for their
practice. Finally, we understand that we must achieve excellence in
program management, focusing on customer needs, promoting problem-
solving, teamwork, and leadership to provide continuous improvements in
the Quality Payment Program.
3. One Quality Payment Program
Clinicians have told us that they do not separate their patient
care into domains, and that the Quality Payment Program needs to
reflect typical clinical workflows in order to achieve its goals of
better patient care. Advanced APMs, the focus of one pathway of the
Quality Payment Program, contribute to better care and smarter spending
by allowing physicians and other clinicians to deliver coordinated,
customized, high-quality care to their patients within a streamlined
payment system. Within MIPS, the second pathway of the Quality Payment
Program, we believe that the unification into one Quality Payment
Program can best be accomplished by making connections across the four
pillars of the MIPS payment structure identified in the MACRA
legislation--quality, clinical practice improvement activities
(referred to as ``improvement activities''), meaningful use of CEHRT
(referred to as ``advancing care information''), and resource use
(referred to as ``cost'')--and by emphasizing that the Quality Payment
Program is at its core about improving the quality of patient care.
Indeed, the bedrock of the Quality Payment Program is high-quality,
patient-centered care followed by useful feedback, in a continuous
cycle of improvement. The principal way MIPS measures quality of care
is through evidence-based clinical quality measures (CQMs) which MIPS
eligible clinicians can select, the vast majority of which are created
by or supported by clinical leaders and endorsed by a consensus-based
process. Over time, the portfolio of quality measures will grow and
develop, driving towards outcomes that are of the greatest importance
to patients and clinicians. Through MIPS, we have the opportunity to
measure quality not only through clinician-proposed measures, but to
take it a step further by also accounting for activities that
physicians themselves identify: Namely, practice-driven quality
improvement. The MACRA requires us to measure whether technology is
used meaningfully. Based on significant feedback, this area is
simplified into supporting the exchange of patient information and how
technology specifically supports the quality goals selected by the
practice. The cost performance category has also been simplified and
weighted at zero percent of the final score for the transition year of
CY 2017. Given the primary focus on quality, we have accordingly
indicated our intention to align these measures fully to the quality
measures over time in the scoring system (see section II.E.6.a. for
further details). That is, we are establishing special policies for the
first year of the Quality Payment Program, which we refer to as the
``transition year'' throughout this final rule with comment period;
this transition year corresponds to the first performance period of the
program, calendar year (CY) 2017, and the first payment year, CY 2019.
We envision that it will take a few years to reach a steady state in
the program, and we therefore anticipate a ramp-up process and gradual
transition with less financial risk for clinicians in at least the
first 2 years. In the transition year in 2017, we will test this
performance category alignment, for example by allowing certain
improvement activities that are completed using CEHRT to achieve a
bonus score in the advancing care information performance category with
the intent of analyzing adoption, and in future years, potentially
adding activities that reinforce integration of the program. Our hope
is for the program to evolve to the point where all the clinical
activities captured in MIPS across the four performance categories
reflect the single, unified goal of quality improvement.
[[Page 77011]]
4. Summary of the Major Provisions
a. Transition Year and Iterative Learning and Development Period
We recognize, as described through many insightful comments, that
many eligible clinicians face challenges in understanding the
requirements and being prepared to participate in the Quality Payment
Program in 2017. As a result, we have decided to finalize transitional
policies throughout this final rule with comment period, which will
focus the program in its initial years on encouraging participation and
educating clinicians, all with the primary goal of placing the patient
at the center of the healthcare system. At the same time, we will also
increase opportunities to join Advanced APMs, giving eligible
clinicians who choose to do so an opportunity to participate.
Given the wide diversity of clinical practices, the initial
development period of the Quality Payment Program implementation would
allow physicians to pick their pace of participation for the first
performance period that begins January 1, 2017. Eligible clinicians
will have three flexible options to submit data to MIPS and a fourth
option to join Advanced APMs in order to become QPs, which would ensure
they do not receive a negative payment adjustment in 2019.
In the transition year CY 2017 of the program, this rule finalizes
a period during which clinicians and CMS will build capabilities to
report and gain experience with the program. Clinicians can choose
their course of participation in this year with four options.
(1) Clinicians can choose to report to MIPS for a full 90-day
period or, ideally, the full year, maximizing their chances of
qualifying for a positive adjustment. In addition,
MIPS eligible clinicians who are exceptional performers in MIPS, as
shown by the practice information that they submit, are eligible for an
additional positive adjustment for each year of the first 6 years of
the program.
(2) Clinicians can choose to report to MIPS for less than the
full CY 2017 performance period, but for at least a full 90-day
period, and report more than one quality measure, more than
one improvement activity, or more than the required measures in the
advancing care information performance category in order to avoid a
negative MIPS payment adjustment and to possibly receive a positive
MIPS payment adjustment.
(3) Clinicians can choose to report one measure in the quality
performance category; one activity in the improvement activities
performance category; or report the required measures of the advancing
care information performance category and avoid a negative MIPS payment
adjustment. Alternatively, if MIPS eligible clinicians choose to not
report even one measure or activity, they will receive the full
negative 4 percent adjustment.
(4) MIPS eligible clinicians can participate in Advanced APMs, and
if they receive a sufficient portion of their Medicare payments or see
a sufficient portion of their Medicare patients through the Advanced
APM, they will qualify for a 5 percent bonus incentive payment in 2019.
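The four participation options above can be summarized in a brief
sketch. This is an illustrative example only, not CMS code; the option
labels and return strings are hypothetical names chosen for this
example, and the outcomes paraphrase the rule text above.

```python
def payment_outcome_2019(option: str) -> str:
    """Map a CY 2017 participation choice to its broad 2019 outcome.

    Illustrative only; option names are hypothetical labels.
    """
    outcomes = {
        # (1) Full-year (or at least full 90-day) MIPS reporting:
        #     positive adjustment possible; exceptional performers may
        #     earn an additional positive adjustment.
        "full_reporting": "positive adjustment possible; "
                          "exceptional-performance bonus possible",
        # (2) Partial reporting (at least a full 90-day period, more
        #     than the minimum measures or activities).
        "partial_reporting": "no negative adjustment; "
                             "positive adjustment possible",
        # (3) Test pace: one quality measure, one improvement activity,
        #     or the required advancing care information measures.
        "test_reporting": "no negative adjustment",
        # (4) Sufficient participation in an Advanced APM as a QP.
        "advanced_apm_qp": "5 percent APM incentive payment",
    }
    # Reporting nothing at all draws the full negative 4 percent
    # adjustment.
    return outcomes.get(option, "negative 4 percent adjustment")
```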
We are finalizing the 2017 performance period for the 2019 MIPS
payment year to be a transition year as part of the development period
in the program. For this transition year, the MIPS performance
threshold will be lowered to 3 points. Clinicians who
achieve a final score of 70 or higher will be eligible for the
exceptional performance adjustment, funded from a pool of $500 million.
For full participation in MIPS and in order to achieve the highest
possible final scores, MIPS eligible clinicians are encouraged to
submit measures and activities in all three integrated performance
categories: Quality, improvement activities, and advancing care
information. To address public comments on the cost performance
category, the weighting of the cost performance category has been
lowered to 0 percent for the transition year. For full participation in
the quality performance category, clinicians will report on six quality
measures, or one specialty-specific or subspecialty-specific measure
set. For full participation in the advancing care information
performance category, MIPS eligible clinicians will report on five
required measures. For full participation in the improvement activities
performance category, clinicians can engage in up to four activities,
rather than the proposed six activities, to earn the highest possible
score of 40.
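The transition-year "full participation" levels above can be collected
into a short reference table. This sketch is illustrative only, not CMS
code; the dictionary keys are hypothetical labels, and the values
paraphrase the requirements stated in this rule.

```python
# Transition-year (CY 2017) full-participation reporting levels,
# paraphrased from the rule text; keys are hypothetical labels.
FULL_PARTICIPATION_CY2017 = {
    "quality": "6 measures, or 1 specialty- or "
               "subspecialty-specific measure set",
    "advancing_care_information": "5 required measures",
    "improvement_activities": "up to 4 activities "
                              "(highest possible score of 40)",
    "cost": "weighted at 0 percent for the transition year",
}
```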
For the transition year CY 2017, for quality, clinicians who submit
one out of at least six quality measures will meet the MIPS performance
threshold of 3; however, more measures are required for groups who
submit measures using the CMS Web Interface. For the transition year CY
2017, for quality, higher measure points may be awarded based on
achieving higher performance in the measure. For improvement
activities, attesting to at least one improvement activity will also be
sufficient to meet the MIPS performance threshold in the transition
year CY 2017. For advancing care information, clinicians reporting on
the required measures in that category will meet the performance
threshold in the transition year. These transition year policies for CY
2017 will encourage participation by clinicians and will provide a ramp
up period for clinicians to prepare for higher performance thresholds
in the second year of the program.
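The transition-year thresholds described above (a performance threshold
of 3 points and an exceptional-performance threshold of 70) imply a
simple mapping from a final score to an adjustment tier. The sketch
below is illustrative only, not CMS code; the tier labels are
hypothetical, and it assumes the usual convention that a score equal to
the performance threshold is held neutral.

```python
PERFORMANCE_THRESHOLD = 3    # CY 2017 transition-year threshold
EXCEPTIONAL_THRESHOLD = 70   # exceptional-performance threshold

def adjustment_tier(final_score: float) -> str:
    """Classify a CY 2017 MIPS final score (0-100) into a broad tier."""
    if final_score >= EXCEPTIONAL_THRESHOLD:
        # Also eligible for the additional adjustment funded from the
        # $500 million exceptional-performance pool.
        return "positive adjustment plus exceptional-performance bonus"
    if final_score > PERFORMANCE_THRESHOLD:
        return "positive adjustment"
    if final_score == PERFORMANCE_THRESHOLD:
        return "neutral (no adjustment)"
    return "negative adjustment"
```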
Historical evidence has shown that clinical practices of all sizes
can successfully submit data, including over 110,000 solo and small
practices with 15 or fewer clinicians who participated in PQRS in 2015.
The transition year and development period approach gives clinicians
structured, practical choices that can best suit their practices.
Resources will be made available to assist clinicians and practices
through this transition. The hope is that by lowering the barriers to
participation at the outset, we can set the foundation for a program
that supports long-term, high-quality patient care through feedback and
open communication between CMS and other stakeholders.
We anticipate that the iterative learning and development period
will last longer than the first year, CY 2017, of the program as we
move towards a steady state; therefore, we envision CY 2018 to also be
transitional in nature to provide a ramp-up of the program and of the
performance thresholds. We anticipate making proposals on the
parameters of this second transition year through rule-making in 2017.
b. Legacy Quality Reporting Programs
This final rule with comment period will sunset payment adjustments
under the current Medicare EHR Incentive Program for EPs (section
1848(o) of the Act), the PQRS (section 1848(k) and (m) of the Act), and
the VM (section 1848(p) of the Act) programs after CY 2018. Components
of these three programs will be carried forward into MIPS. This final
rule with comment period establishes new subpart O of our regulations
at 42 CFR part 414 to implement the new MIPS program as required by the
MACRA.
c. Significant Changes From Proposed Rule
In developing this final rule with comment period, we sought
feedback from stakeholders throughout the process, including through
Requests for Information in October 2015 and through the comment
process for the proposed rule from April to June 2016. We received
thousands of comments from a broad range of sources including
professional associations and societies, physician practices,
hospitals, patient
[[Page 77012]]
groups, and health IT vendors, and we thank our many commenters and
acknowledge their valued input throughout the proposed rule process. In
response to comments to the proposed rule, we have made significant
changes in this final rule with comment period, including (1)
bolstering support for small and independent practices; (2)
strengthening the movement towards Advanced Alternative Payment Models
by offering potential new opportunities such as the Medicare ACO Track
1+; (3) securing a strong start to the program with a flexible, pick-
your-own-pace approach to the initial years of the program; and (4)
connecting the statutory domains into one unified program that supports
clinician-driven quality improvement. These themes are illustrated in
the following specific policy changes: (1) The creation of a transition
year and iterative learning and development period in the beginning of
the program; (2) the adjustment of the MIPS low-volume threshold; (3)
the establishment of an Advanced APM financial risk standard that
promotes participation in robust, high-quality models; (4) the
simplification of prior ``all-or-nothing'' requirements in the use of
certified EHR technology; and (5) the establishment of Medical Home
Model standards that promote care coordination.
We intend to continue open communication with stakeholders,
including consultation with tribes and tribal officials, on an ongoing
basis as we develop the Quality Payment Program in future years.
d. Small Practices
As outlined above, protection of small, independent practices is an
important thematic objective for this final rule with comment period. For
2017, many small practices will be excluded from new requirements due
to the low-volume threshold, which has been set at less than or equal
to $30,000 in Medicare Part B allowed charges or less than or equal to
100 Medicare patients, representing 32.5 percent of pre-exclusion
Medicare clinicians but only 5 percent of Medicare Part B spending.
Stakeholder comments suggested setting a higher low-volume threshold
for exclusion from MIPS but allowing clinicians that would be excluded
by the threshold to opt in to the program if they wished to report to
MIPS and receive a MIPS payment adjustment for the year. We considered
this option but determined that it was inconsistent with the statutory
MIPS exclusion based on the low-volume threshold. We anticipate that
more clinicians will be determined to be eligible to participate in the
program in future years.
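The low-volume determination described above can be expressed as a
simple check. The following is an illustrative sketch only, not CMS
software; the function and parameter names are placeholders:

```python
# Illustrative sketch of the CY 2017 low-volume threshold: a clinician
# is excluded from MIPS if Medicare Part B allowed charges are $30,000
# or less, OR if the clinician furnished care to 100 or fewer Medicare
# patients during the determination period.

def excluded_by_low_volume(allowed_charges: float, medicare_patients: int) -> bool:
    """Return True if the clinician falls under the low-volume threshold."""
    return allowed_charges <= 30_000 or medicare_patients <= 100
```

Note that either condition alone is sufficient for exclusion; a
clinician above the dollar threshold but at or below the patient-count
threshold is still excluded.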
MACRA also provides that solo and small practices may join
``virtual groups'' and combine their MIPS reporting. Many commenters
suggested that we allow groups with more than 10 clinicians to
participate as virtual groups. However, the statute limits the virtual
group option to individuals and groups of not more than 10 clinicians.
We are not implementing virtual groups in the transition year CY 2017
of the program; however, through the policies of the transition year
and development period, we believe we have addressed some of the
concerns expressed by clinicians hesitant to participate in the Quality
Payment Program. CMS wants to make sure the virtual group technology is
meaningful and simple to use for clinicians, and we look forward to
stakeholder engagement on how to structure and implement virtual groups
in future years of the program.
In keeping with the objectives of providing education about the
program and maximizing participation, and as mandated by the MACRA,
$100 million in technical assistance will be available to MIPS eligible
clinicians in small practices, rural areas, and practices located in
geographic health professional shortage areas (HPSAs), including IHS,
tribal, and urban Indian clinics, through contracts with quality
improvement organizations, regional health collaboratives, and others
to offer guidance and assistance to MIPS eligible clinicians in
practices of 15 or fewer MIPS eligible clinicians. Priority will be
given to practices located in rural areas, defined as clinicians in zip
codes designated as rural, using the most recent Health Resources and
Services Administration (HRSA) Area Health Resource File data set
available; medically underserved areas (MUAs); and practices with low
MIPS final scores or in transition to APM participation. The MACRA also
includes provisions requiring an examination of the pooling of
financial risk for physician practices, in particular for small
practices. Specifically, section 101(c)(2)(C) of MACRA requires the
Government Accountability Office (GAO) to submit a report to Congress,
not later than January 1, 2017, examining whether entities that pool
financial risk for physician practices, such as independent risk
managers, can play a role in supporting physician practices,
particularly small physician practices, in assuming financial risk for
the treatment of patients. We have been closely engaged with the GAO
throughout their study to better understand the unique needs and
challenges faced by clinicians in small practices and practices in
rural or health professional shortage areas. We have provided
information to the GAO, and the GAO has shared some of their initial
findings regarding these challenges. We look forward to further
engagement with the GAO on this topic and to the release of GAO's final
report. Using the knowledge obtained from small practices, other
stakeholders, and the public, as well as from GAO, we continue to work
to improve the flexibility and support available to small, underserved,
and rural practices. Throughout the evolution of the Quality Payment
Program that will unfold over the years to come, CMS is committed to
working together with stakeholders to address the unique challenges
these practices encounter.
Using updated policies for the transition year and development
period, we performed an updated regulatory impact analysis, including
for small and solo practices. With the extensive changes to policy and
increased flexibility, we believe that estimating impacts of this final
rule with comment period using only historic 2015 quality submission
data significantly overestimates the impact on small and solo
practices. Although small and solo practices have historically been
less likely to engage in PQRS and quality reporting, we believe that
small and solo practices will respond to MIPS by participating at a
rate close to that of other practice sizes. In order to quantify the
impact of the rule on MIPS eligible clinicians, including small and
solo practices, we have prepared two sets of analyses that assume the
participation rates for some categories of small practices will be
similar to those of other practice size categories. Specifically, our
primary analysis assumes that each practice size grouping will achieve
at least a 90 percent participation rate, and our alternative assumption
is that each practice size grouping will achieve at least an 80 percent
participation rate. In both sets of analyses, we estimate that over 90
percent of MIPS eligible clinicians will receive a positive or neutral
MIPS payment adjustment in the transition year, and that at least 80
percent of clinicians in small and solo practices with 1-9 clinicians
will receive a positive or neutral MIPS payment adjustment.
e. Advanced Alternative Payment Models (Advanced APMs)
In this rule, we finalize requirements we will use for the purposes
of the incentives for participation in Advanced
[[Page 77013]]
APMs, and the following is a summary of our finalized policies. The
MACRA defines APM for the purposes of the incentive as a model under
section 1115A of the Act (excluding a health care innovation award),
the Shared Savings Program under section 1899 of the Act, a
demonstration under section 1866C of the Act, or a demonstration
required by federal law.
APMs represent an important step forward in the Administration's
efforts to move our healthcare system from volume-based to value-based
care. APMs that meet the criteria to be Advanced APMs provide the
pathway through which eligible clinicians, who would otherwise
participate in MIPS, can become Qualifying APM Participants (QPs), and
therefore, earn incentive payments for their Advanced APM
participation. In the proposed rule, we estimated that 30,000 to 90,000
clinicians would be QPs in 2017. With new Advanced APMs expected to
become available for participation in 2017 and 2018, including the
Medicare ACO Track 1 Plus (1+), and anticipated amendments to reopen
applications for or modify current APMs, such as the Maryland All-Payer
Model and Comprehensive Care for Joint Replacement (CJR) model, we
anticipate higher numbers of QPs--approximately 70,000 to 120,000 in
2017 and 125,000 to 250,000 in 2018.
As discussed in section II.F.4.b. of this final rule with comment
period, we are exploring development of the Medicare ACO Track 1+ Model
to begin in 2018. The model would be voluntary for ACOs currently
participating in Track 1 of the Shared Savings Program or ACOs seeking
to participate in the Shared Savings Program for the first time. It
would test a payment model that incorporates more limited downside risk
than is currently present in Tracks 2 or 3 of the Shared Savings
Program but sufficient financial risk in order to be an Advanced APM.
We will announce additional information about the model in the future.
This rule finalizes two types of Advanced APMs: Advanced APMs and
Other Payer Advanced APMs. To be considered an Advanced APM, an APM
must meet all three of the following criteria, as required under
section 1833(z)(3)(D) of the Act: (1) The APM must require participants
to use CEHRT; (2) The APM must provide for payment for covered
professional services based on quality measures comparable to those in
the quality performance category under MIPS; and (3) The APM must
either require that participating APM Entities bear risk for monetary
losses of a more than nominal amount under the APM, or be a Medical
Home Model expanded under section 1115A(c) of the Act. In this rule, we
finalize proposals pertaining to all of these criteria.
To be an Other Payer Advanced APM, as set forth in section
1833(z)(2) of the Act, a payment arrangement with a payer (for example,
Medicaid or a commercial payer) must meet all three of the following
criteria: (1) The payment arrangement must require participants to use
CEHRT; (2) The payment arrangement must provide for payment for covered
professional services based on quality measures comparable to those in
the quality performance category under MIPS; and (3) The payment
arrangement must require participants to either bear more than nominal
financial risk if actual aggregate expenditures exceed expected
aggregate expenditures; or be a Medicaid Medical Home Model that meets
criteria comparable to Medical Home Models expanded under section
1115A(c) of the Act.
We are completing an initial set of Advanced APM determinations
that we will release as soon as possible but no later than January 1,
2017. For new APMs that are announced after the initial determination,
we will include Advanced APM determinations in conjunction with the
first public notice of the APM, such as the Request for Applications
(RFA) or final rule. All determinations of Advanced APMs will be posted
on our Web site and updated on an ad hoc basis, but no less frequently
than annually, as new APMs become available and others end or change.
An important avenue for the creation of innovative payment models
is the PTAC, created by the MACRA. The PTAC is an 11-member independent
federal advisory committee to the HHS Secretary. The PTAC will review
stakeholders' proposed PFPMs, and make comments and recommendations to
the Secretary regarding whether the PFPMs meet criteria established by
the Secretary. PTAC comments and recommendations will be reviewed by
the CMS Innovation Center and the Secretary, and we will post a
detailed response to them on the CMS Web site.
(i) QP Determination
QPs are eligible clinicians in an Advanced APM who have a certain
percentage of their patients or payments through an Advanced APM. QPs
are excluded from MIPS and receive a 5 percent incentive payment each
year from 2019 through 2024. We finalize our proposal that
professional services furnished at Critical Access Hospitals (CAHs),
Rural Health Clinics (RHCs), and Federally Qualified Health Centers
(FQHCs) that meet certain criteria be counted towards the QP
determination using the patient count method.
We finalize definitions of Medical Home Model and Medicaid Medical
Home Model and the unique standards by which Medical Home Models may
meet the financial risk criterion to be an Advanced APM.
The statute sets thresholds for the level of participation in
Advanced APMs required for an eligible clinician to become a QP for a
year. The Medicare Option, based on Part B payments for covered
professional services or counts of patients furnished covered
professional services under Part B, is applicable beginning in the
payment year 2019. The All-Payer Combination Option, which utilizes the
Medicare Option as well as an eligible clinician's participation in
Other Payer Advanced APMs, is applicable beginning in the payment year
2021. For eligible clinicians to become QPs through the All-Payer
Combination Option, an Advanced APM Entity or eligible clinician must
participate in an Advanced APM under Medicare and also submit
information to CMS so that we can determine whether payment
arrangements with non-Medicare payers are Other Payer Advanced APMs
and whether an eligible clinician meets the requisite QP threshold of
participation. We are finalizing our methodologies to evaluate eligible
clinicians using the Medicare and All-Payer Combination Options.
We are finalizing the two methods by which we will calculate
Threshold Scores to compare to the QP thresholds and make QP
determinations for eligible clinicians. The payment amount method
assesses the amount of payments for Part B covered professional
services that are furnished through an Advanced APM. The patient count
method assesses the number of patients furnished Part B covered
professional services through an Advanced APM.
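The two methods can be sketched as percentages of an eligible
clinician's Part B activity attributable to Advanced APMs. This is an
illustrative example; the function and variable names are assumptions,
not terms defined in the rule:

```python
# Illustrative Threshold Score calculations. Each method yields a
# percentage that is compared to the applicable QP threshold for the
# year.

def payment_amount_score(apm_payments: float, all_part_b_payments: float) -> float:
    """Percent of payments for Part B covered professional services
    furnished through an Advanced APM."""
    return 100.0 * apm_payments / all_part_b_payments

def patient_count_score(apm_patients: int, all_part_b_patients: int) -> float:
    """Percent of patients furnished Part B covered professional
    services through an Advanced APM."""
    return 100.0 * apm_patients / all_part_b_patients
```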
We are finalizing our proposal to identify individual eligible
clinicians by a unique APM participant identifier using the
individuals' APM, APM Entity, and TIN/NPI combinations, and to assess
as an APM Entity group all individual eligible clinicians listed as
participating in an Advanced APM Entity to determine their QP status
for a year. We are finalizing that if an individual eligible clinician
who participates in multiple Advanced APM Entities does not achieve QP
status through participation in any single APM Entity, we will assess
the eligible
[[Page 77014]]
clinician individually to determine QP status based on combined
participation in Advanced APMs.
We are finalizing the method to calculate and disburse the lump-sum
APM Incentive Payments to QPs, and we are finalizing a specific
approach for calculating the APM Incentive Payment when a QP also
receives non-FFS payments or has received payment adjustments through
the Medicare EHR Incentive Program, PQRS, VM, or MIPS during the prior
period used for determining the APM Incentive Payment.
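As a simplified illustration of the lump-sum calculation (the actual
estimation of aggregate payment amounts involves the adjustments
described above, which are not modeled here; names are placeholders):

```python
# Illustrative sketch: the APM Incentive Payment equals 5 percent of
# the estimated aggregate payments for Part B covered professional
# services during the prior period. The single multiplication below
# omits the non-FFS and prior payment adjustment handling described in
# the rule.

APM_INCENTIVE_RATE = 0.05

def apm_incentive_payment(estimated_prior_period_payments: float) -> float:
    """Lump-sum incentive payment for a Qualifying APM Participant."""
    return APM_INCENTIVE_RATE * estimated_prior_period_payments
```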
We are finalizing a modified policy such that, following a final
determination that an Advanced APM Entity group or eligible clinician
is determined to be a Partial Qualifying APM Participant (Partial QP),
the Advanced APM Entity--or eligible clinician in the case of an
individual determination--will make an election on behalf of all of its
eligible clinicians in the group of whether to report to MIPS, thus
making all eligible clinicians in the Advanced APM Entity group subject
to MIPS payment adjustments; or not report to MIPS, thus excluding all
eligible clinicians in the APM Entity group from MIPS adjustments. We
finalize our proposals to vet and monitor APM Entities, Advanced APM
Entities, and eligible clinicians participating in those entities. We
are finalizing a definition for PFPMs and criteria for use by the PTAC
in fulfilling its responsibility to evaluate proposals for PFPMs.
We are finalizing an accelerated timeline for making QP
determinations, and will notify eligible clinicians of their QP status
as soon as possible, in advance of the end of the MIPS performance
period so that QPs will know whether they are excluded from MIPS prior
to having to submit information to CMS for purposes of MIPS.
We are finalizing the requirement that MIPS eligible clinicians, as
well as EPs, eligible hospitals, and CAHs under the existing Medicare
and Medicaid EHR Incentive Programs demonstrate cooperation with
certain provisions concerning blocking the sharing of information under
section 106(b)(2) of the MACRA and, separately, to demonstrate
engagement with activities that support health care providers with the
performance of their CEHRT, such as cooperation with ONC direct review
of certified health information technologies.
f. Merit-Based Incentive Payment System (MIPS)
In establishing MIPS, this final rule with comment period will
define MIPS participants as ``MIPS eligible clinicians'' rather than
``MIPS EPs'' as that term is defined at section 1848(q)(1)(C) and used
throughout section 1848(q) of the Act. MIPS eligible clinicians will
include physicians, physician assistants, nurse practitioners, clinical
nurse specialists, certified registered nurse anesthetists, and groups
that include such clinicians who bill under Medicare Part B. The rule
finalizes definitions and requirements for groups. In addition to
finalizing definitions for MIPS eligible clinicians, the rule also
finalizes rules for the specific Medicare-enrolled clinicians that will
be excluded from MIPS, including newly Medicare-enrolled MIPS eligible
clinicians, QPs, certain Partial QPs, and clinicians that fall under
the finalized low-volume threshold.
For the 2017 performance period, we estimate that more than half of
clinicians--approximately 738,000 to 780,000--billing under the
Medicare PFS will be excluded from MIPS due to several factors,
including the MACRA itself. We estimate that nearly 200,000 clinicians,
or approximately 14.4 percent, are not one of the eligible types of
clinicians for the transition year CY 2017 of MIPS under section
1848(q)(1)(C) of the Act. The largest cohort of clinicians excluded
from MIPS is low-volume clinicians, defined as those clinicians with
less than or equal to $30,000 in allowed charges or less than or equal
to 100 Medicare patients, representing approximately 32.5 percent of
all clinicians billing Medicare Part B services or over 380,000
clinicians. Additionally, between 70,000 and 120,000 clinicians
(approximately 5-8 percent of all clinicians billing under Medicare
Part B) will be excluded from MIPS due to being QPs based on
participation in Advanced APMs. In aggregate, the eligible clinicians
excluded from MIPS represent only 22 to 27 percent of total Part B
allowed charges.
This rule finalizes MIPS performance standards and a minimum MIPS
performance period of any 90 continuous days during CY 2017 (January 1
through December 31) for all measures and activities applicable to the
integrated performance categories. After consideration of public
comments, this rule finalizes a shorter than annual performance period
in 2017 to allow flexible participation options for MIPS eligible
clinicians as the program begins and evolves over time. For performance
periods occurring in 2017, MIPS eligible clinicians will be able to
pick a pace of participation that best suits their practices, including
submitting data, in special circumstances as discussed in section
II.E.5. of this rule, for a period of less than 90 days, to avoid a
negative MIPS payment adjustment. Further, we are finalizing our
proposal to use performance in 2017 as the performance period for the
2019 payment adjustment. Therefore, the first performance period will
start in 2017 and consist of a minimum period of any 90 continuous days
during the calendar year in order for clinicians to be eligible for
payment adjustment above neutral. Performance in that period of 2017
will be used to determine the 2019 payment adjustment. This timeframe
is needed to allow data and claims to be submitted and data analysis to
occur in the initial years. In subsequent years, we intend to explore
ways to shorten the period between the performance period and the
payment year, and ongoing performance feedback will be provided more
frequently. The final policies for CY 2017 provide flexibilities to
ensure clinicians have ample participation opportunities.
As directed by the MACRA, this rule finalizes measures, activities,
reporting, and data submission standards across four integrated
performance categories: Quality, cost, improvement activities, and
advancing care information, each linked by the same overriding mission
of supporting care improvement under the vision of one Quality Payment
Program. Consideration will be given to the application of measures and
activities to non-patient facing MIPS eligible clinicians.
Under the requirements finalized in this rule, there will be
options for reporting as an individual MIPS eligible clinician or as
part of a group. Some data may be submitted via relevant third party
intermediaries, such as qualified clinical data registries (QCDRs),
health IT vendors,\1\ qualified registries, and CMS-approved survey
vendors.
---------------------------------------------------------------------------
\1\ We also note that throughout this final rule, as in the
proposed rule, we use the terms ``EHR Vendor'' and ``Health IT
Vendor.'' First, the use of the terms ``health IT'' and ``EHR'' is
based on the common terminology within the specified program (see 80
FR 62604; and the advancing care information performance category in
this rule). Second, we recognize that a ``health IT vendor'' may or
may not also be a ``health IT developer'' and, in some cases, the
developer and the vendor of a single product may be different
entities. Under the ONC Health IT Certification Program (Program), a
health IT developer constitutes a vendor, self-developer, or other
entity that presents health IT for certification or has health IT
certified under the Program. Therefore, for purposes of this final
rule, we clarify that the term ``vendor'' shall also include
developers who create or develop health IT. Throughout this final
rule, we use the term ``health IT vendor'' or ``EHR vendor'' to
refer to entities that support the health IT requirements of a MIPS
eligible clinician participating in the proposed Quality Payment
Program. This use is consistent with prior CMS rules, see for
example the 2014 CEHRT Flexibility final rule (79 FR 52915).
---------------------------------------------------------------------------
[[Page 77015]]
Within each performance category, we are finalizing specific
requirements for full participation in MIPS, which involves submitting
data on quality measures, improvement activities, and use of certified
EHR technology on a minimum of any continuous 90 days up to the full
calendar year in 2017 in order to be eligible for a positive MIPS
payment adjustment. It is at the MIPS eligible clinician's discretion
whether to submit data for the same 90-day period for the various
measures and activities or for different time periods for different
measures and activities. Note that during the 2017 transition year,
MIPS eligible clinicians may choose to report a minimum of a single
measure in the quality performance category, a single activity in the
improvement activities performance category, or the required measures in
the advancing care information performance category, in order to avoid
a negative payment adjustment. For full participation in MIPS, the
specific requirements are as follows:
(i) Quality
Quality measures will be selected annually through a call for
quality measures process, and a final list of quality measures will be
published in the Federal Register by November 1 of each year. For MIPS
eligible clinicians choosing full participation in MIPS and the
potential for a higher payment adjustment, we note that for a minimum
of a continuous 90-day performance period, the MIPS eligible clinician
or group will report at least six measures including at least one
outcome measure if available. If fewer than six measures apply to the
individual MIPS eligible clinician or group, then the MIPS eligible
clinician or group will only be required to report on each measure that
is applicable.
Alternatively, for a minimum of a continuous 90-day period, the
MIPS eligible clinician or group can report one specialty-specific
measure set, or the measure set defined at the subspecialty level, if
applicable. If the measure set contains fewer than six measures, MIPS
eligible clinicians will be required to report all available measures
within the set. If the measure set contains six or more measures, MIPS
eligible clinicians can choose six or more measures to report within
the set. Regardless of the number of measures that are contained in the
measure set, MIPS eligible clinicians reporting on a measure set will
be required to report at least one outcome measure or, if no outcome
measures are available in the measure set, report another high priority
measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) within the measure set in
lieu of an outcome measure.
(ii) Improvement Activities
Improvement activities are those that support broad aims within
healthcare delivery, including care coordination, beneficiary
engagement, population management, and health equity. In response to
comments from experts and stakeholders across the healthcare system,
improvement activities were given relative weights of high and medium.
We are reducing the number of activities required to achieve full
credit from six medium-weighted or three high-weighted activities to
four medium-weighted or two high-weighted activities to receive full
credit in this performance category in CY 2017. For small practices,
rural practices, or practices located in geographic health professional
shortage areas (HPSAs), and non-patient facing MIPS eligible
clinicians, we will reduce the requirement to only one high-weighted or
two medium-weighted activities. We also expand our definition of how
CMS will recognize a MIPS eligible clinician or group as being a
certified patient-centered medical home or comparable specialty
practice to include certification from a national program, regional or
state program, private payer or other body that administers patient-
centered medical home accreditation. As previously mentioned, in
recognition of improvement activities as supporting the central mission
of a unified Quality Payment Program, we will include a designation in
the inventory of improvement activities of which activities also
qualify for the advancing care information bonus score, consistent with
our desire to recognize that EHR technology is often deployed to
improve care in ways that our programs should recognize.
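The activity-weighting rules described above can be illustrated with a
simple credit calculation. The unit values below are placeholders
chosen so that a high-weighted activity counts as two medium-weighted
activities; they are not the rule's actual point values:

```python
# Illustrative improvement activities credit: full credit requires four
# medium-weighted or two high-weighted activities (two medium or one
# high for small, rural, HPSA, and non-patient facing clinicians).

MEDIUM, HIGH = 1, 2  # placeholder weights; HIGH counts double

def improvement_activities_credit(activity_weights, special_status=False):
    """Return the fraction of full category credit earned, capped at 1.0."""
    target = 2 if special_status else 4  # in MEDIUM-weight units
    return min(1.0, sum(activity_weights) / target)
```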
(iii) Advancing Care Information Performance Category
Measures and objectives in the advancing care information
performance category focus on the secure exchange of health information
and the use of certified electronic health record technology (CEHRT) to
support patient engagement and improved healthcare quality. We are
maintaining alignment of the advancing care information performance
category with the other integrated performance categories for MIPS. We
are reducing the total number of required measures from eleven in the
proposed rule to only five in our final policy. All other measures
would be optional for reporting. Reporting on all five of the required
measures would earn the MIPS eligible clinician a base score of 50
percent in this performance category. Reporting on the optional
measures would allow a clinician to earn a higher
score. For the transition year, we will award a bonus score for
improvement activities that utilize CEHRT and for reporting to public
health or clinical data registries.
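The scoring shape described above, a base score plus optional and
bonus points, can be sketched as follows. This is an illustrative
simplification; the per-measure point values for optional reporting
and bonuses are not modeled:

```python
# Illustrative advancing care information score: reporting all five
# required measures earns a 50 percent base score; optional measures
# and transition-year bonuses can raise the score, capped at 100.

def aci_score(reported_all_required: bool,
              optional_points: float = 0.0,
              bonus_points: float = 0.0) -> float:
    """Category score out of 100; no base score means no category score."""
    if not reported_all_required:
        return 0.0
    return min(100.0, 50.0 + optional_points + bonus_points)
```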
Public commenters requested that the advancing care information
performance category allow for reporting on ``use cases'' such as the
use of CEHRT to manage referrals and consultations (``closing the
referral loop'') and other practice-based activities for which CEHRT is
used as part of the typical workflow. This is an area we intend to
explore in future rulemaking but did not finalize any such policies in
this rule. However, for the 2017 transition year, we will award bonus
points for improvement activities that utilize CEHRT and for reporting
to a public health or clinical data registry, reflecting the belief
that the advancing care information performance category should align
with the other performance categories to achieve the unified goal of
quality improvement.
(iv) Cost
For the transition year, we are finalizing a weight of zero percent
for the cost performance category in the final score, and MIPS scoring
in 2017 will be determined based on the other three integrated MIPS
performance categories. Cost measures are calculated from
administrative claims data and do not require reporting of any data by
MIPS eligible clinicians to CMS. Although cost measures will
not be used to determine the final score in the transition year, we
intend to calculate performance on certain cost measures and give this
information in performance feedback to clinicians. We intend to
calculate measures of total per capita costs for all attributed
beneficiaries and a Medicare Spending per Beneficiary (MSPB) measure.
In addition, we are finalizing 10 episode-based measures that were
previously made available to clinicians in feedback reports and met
standards for reliability. Starting in performance year 2018, as
performance feedback is available on at least an annual basis, the cost
performance category contribution to the final score will gradually
increase from 0 to the 30 percent level required
[[Page 77016]]
by MACRA by the third MIPS payment year of 2021.
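The ramp-up can be summarized by performance year. Only the endpoints
are stated in this summary; the intermediate 2018 value below is a
placeholder assumption, not a finalized weight:

```python
# Illustrative cost category weights by performance year. The zero
# weight for 2017 and the 30 percent statutory level (reached by the
# 2021 payment year, i.e., the 2019 performance year) come from the
# summary above; the 2018 value is a placeholder.

COST_WEIGHT_BY_PERFORMANCE_YEAR = {
    2017: 0.00,  # transition year: cost does not affect the final score
    2018: 0.10,  # placeholder for a gradually increasing weight
    2019: 0.30,  # level required by MACRA
}
```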
(v) Clinicians in MIPS APMs
We are finalizing standards for measures, scoring, and reporting
for MIPS eligible clinicians across all four performance categories
outlined in section II.E.5.h of this final rule with comment
period. Beginning in 2017, some APMs, by virtue of their structure,
will not meet statutory requirements to be categorized as Advanced
APMs. Eligible clinicians in these APMs, hereafter referred to as MIPS
APMs, will be subject to MIPS reporting requirements and the MIPS
payment adjustment. In addition, eligible clinicians who are in
Advanced APMs but do not meet participation thresholds to be excluded
from MIPS for a year will be subject to the scoring standards for MIPS
reporting requirements and the MIPS payment adjustment. In response to
comments, in an effort to recognize these eligible clinicians'
participation in delivery system reform and to avoid potential
duplication or conflicts between these APMs and MIPS, we finalize an
APM scoring standard that is different from the generally applicable
standard. We finalize our proposal that MIPS eligible clinicians who
participate in MIPS APMs will be scored using the APM scoring standard
instead of the generally applicable MIPS scoring standard.
(vi) Scoring Under MIPS
We are finalizing that MIPS eligible clinicians have the
flexibility to submit information individually or via a group or an APM
Entity group; however, the MIPS eligible clinician will use the same
identifier for all performance categories. The finalized scoring
methodology has a unified approach across all performance categories,
which will help MIPS eligible clinicians understand in advance what
they need to do in order to perform well in MIPS. The three performance
category scores (quality, improvement activities, and advancing care
information) will be aggregated into a final score. The final score
will be compared against a MIPS performance threshold of 3 points. The
final score will be used to determine whether a MIPS eligible clinician
receives an upward MIPS payment adjustment, no MIPS payment adjustment,
or a downward MIPS payment adjustment as appropriate. Upward MIPS
payment adjustments may be scaled for budget neutrality, as required by
MACRA. The final score will also be used to determine whether a MIPS
eligible clinician qualifies for an additional positive adjustment
factor for exceptional performance. The performance threshold will be
set at 3 points for the transition year, such that clinicians engaged
in the program who successfully report one quality measure can avoid a
downward adjustment. MIPS eligible clinicians submitting additional
data for one or more of the three performance categories for at least a
full 90-day period may qualify for varying levels of positive
adjustments.
In future years of the program, we will require longer performance
periods and higher performance in order to avoid a negative MIPS
payment adjustment.
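The transition-year decision rule described above can be sketched as a simple comparison against the performance threshold. The following is an illustrative simplification only; the function name and return strings are hypothetical, and the actual MIPS methodology scales adjustments for budget neutrality and applies many additional rules:

```python
def mips_adjustment_direction(final_score: float,
                              performance_threshold: float = 3.0,
                              exceptional_threshold: float = 70.0) -> str:
    """Illustrative sketch of the CY 2017 transition-year decision rule.

    Hypothetical simplification: the actual adjustment amounts are
    scaled for budget neutrality as required by MACRA.
    """
    if final_score > performance_threshold:
        direction = "upward adjustment"
    elif final_score == performance_threshold:
        direction = "no adjustment"
    else:
        direction = "downward adjustment"
    # Scores at or above the exceptional performance threshold also
    # qualify for the additional positive adjustment factor.
    if final_score >= exceptional_threshold:
        direction += " + exceptional performance bonus"
    return direction

# A clinician who successfully reports one quality measure earns at
# least 3 points, meeting the threshold and avoiding a downward adjustment.
print(mips_adjustment_direction(3.0))   # no adjustment
print(mips_adjustment_direction(75.0))  # upward adjustment + exceptional performance bonus
```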
(vii) Performance Feedback
We are finalizing a process for providing performance feedback to
MIPS eligible clinicians. Initially, we will provide performance
feedback on an annual basis. In future years, we aim to provide
performance feedback on a more frequent basis, as well as providing
feedback on the performance categories of improvement activities and
advancing care information in line with clinician requests for timely,
actionable feedback that they can use to improve care. We are
finalizing our proposal to make performance feedback available using a
web-based application. Further, we are finalizing our proposal to
leverage additional mechanisms such as health IT vendors and registries
to help disseminate data contained in the performance feedback to MIPS
eligible clinicians where applicable.
(viii) Targeted Review Processes
We are finalizing a targeted review process under MIPS wherein a
MIPS eligible clinician may request that we review the calculation of
the MIPS payment adjustment factor and, as applicable, the calculation
of the additional MIPS payment adjustment factor applicable to such
MIPS eligible clinician for a year.
(ix) Third Party Intermediaries
We are finalizing requirements for third party data submission to
MIPS that are intended to decrease burden to individual clinicians.
Specifically, qualified registries, QCDRs, health IT vendors, and CMS-
approved survey vendors will have the ability to act as intermediaries
on behalf of MIPS eligible clinicians and groups for submission of data
to CMS across the quality, improvement activities, and advancing care
information performance categories.
(x) Public Reporting
We are finalizing a process for public reporting of MIPS
information through the Physician Compare Web site, with the intention
of promoting fairness and transparency. We are finalizing public
reporting of a MIPS eligible clinician's data; for each program year,
we will post on a public Web site, in an easily understandable format,
information regarding the performance of MIPS eligible clinicians or
groups under MIPS.
5. Payment Adjustments
We estimate that approximately 70,000 to 120,000 clinicians will
become QPs in 2017 and approximately 125,000 to 250,000 clinicians will
become QPs in 2018 through participation in Advanced APMs; they are
estimated to receive between $333 million and $571 million in APM
Incentive Payments for CY 2019. As with MIPS, we expect that APM
participation will drive quality improvement for clinical care provided
to Medicare beneficiaries and to all patients in the health care
system.
Under the policies finalized in this rule, we estimate that
between approximately 592,000 and 642,000 eligible clinicians will be
required to participate in MIPS in its transition year. In 2019, MIPS
payment adjustments will be applied based on MIPS eligible clinicians'
performance on specified measures and activities within three
integrated performance categories; the fourth category of cost, as
previously outlined, will be weighted to zero in the transition year.
Assuming that 90 percent of eligible clinicians of all practice sizes
participate in the program, we estimate that MIPS payment adjustments
will be approximately equally distributed between negative MIPS payment
adjustments ($199 million) and positive MIPS payment adjustments ($199
million) to MIPS eligible clinicians, to ensure budget neutrality.
Positive MIPS payment adjustments will also include an additional $500
million for exceptional performance payments to MIPS eligible
clinicians whose performance meets or exceeds a threshold final score
of 70. These MIPS payment adjustments are expected to drive quality
improvement in the provision of MIPS eligible clinicians' care to
Medicare beneficiaries and to all patients in the health care system.
However, the distribution could change based on the final population of
MIPS eligible clinicians for CY 2019 and the distribution of scores
under the program. We believe that starting with these modest initial
MIPS payment adjustments, representing less than 0.2 percent of
Medicare expenditures for physician and clinical services, is in the
long-term best interest of maximizing
participation and starting the Quality Payment Program off on the right
foot, even if it limits the upside during the transition year. The
increased availability of Advanced APM opportunities, including through
Medical Home models, also provides earlier avenues to earn bonus
payments for those who choose to participate.
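The budget-neutrality constraint described above can be illustrated with a toy calculation: positive adjustments are scaled so that their total matches the total of negative adjustments. The function below is a hypothetical sketch; the dollar figures are examples, and the actual MACRA mechanism applies the scaling factor per clinician, with the factor capped at 3.0:

```python
def budget_neutral_scaling(negative_total: float,
                           unscaled_positive_total: float) -> float:
    """Return a uniform scaling factor for positive MIPS adjustments so
    that total positive payments equal total negative payments.

    Toy illustration only: actual scaling under MACRA is applied at the
    clinician level, and the statutory cap on the factor is 3.0.
    """
    if unscaled_positive_total == 0:
        return 0.0
    return min(negative_total / unscaled_positive_total, 3.0)

# If downward adjustments total $199M but unscaled upward adjustments
# would total $250M, each positive adjustment is scaled down:
factor = budget_neutral_scaling(199e6, 250e6)
print(round(factor, 3))  # 0.796
```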
6. The Broader Context of Delivery System Reform and Healthcare System
Innovation
In January 2015, the Administration announced new goals for
transforming Medicare by moving away from traditional FFS payments in
Medicare towards a payment system focused on linking physician
reimbursements to quality care through APMs (http://www.hhs.gov/about/news/2015/01/26/better-smarter-healthier-in-historic-announcement-hhs-sets-clear-goals-and-timeline-for-shifting-medicare-reimbursements-from-volume-to-value.html#) and other value-based purchasing
arrangements. This is part of an overarching Administration strategy to
transform how health care is delivered in America, changing payment
structures to improve quality and patient health outcomes. The policies
finalized in this rule are intended to continue to move Medicare away
from a primarily volume-based FFS payment system for physicians and
other professionals.
The Affordable Care Act includes a number of provisions, for
example, the Medicare Shared Savings Program, designed to improve the
quality of Medicare services, support innovation and the establishment
of new payment models, better align Medicare payments with health care
provider costs, strengthen Medicare program integrity, and put Medicare
on a firmer financial footing.
The Affordable Care Act created the Center for Medicare and
Medicaid Innovation (Innovation Center). The Innovation Center was
established by section 1115A of the Act (as added by section 3021 of
the Affordable Care Act). The Innovation Center's mandate gives it
flexibility within the parameters of section 1115A of the Act to select
and test promising innovative payment and service delivery models. The
Congress created the Innovation Center for the purpose of testing
innovative payment and service delivery models to reduce program
expenditures while preserving or enhancing the quality of care provided
to those individuals who receive Medicare, Medicaid, or CHIP benefits.
See https://innovation.cms.gov/about/index.html. The Secretary may
through rulemaking expand the duration and scope of a model being
tested if (1) the Secretary finds that such expansion (i) is expected
to reduce spending without reducing the quality of care, or (ii) to
improve the quality of patient care without increasing spending; (2)
the CMS Chief Actuary certifies that such expansion would reduce (or
would not result in any increase in) net program spending under
applicable titles; and (3) the Secretary finds that such expansion
would not deny or limit the coverage or provision of benefits under the
applicable title for applicable individuals.
The Innovation Center's portfolio of models has attracted
participation from a broad array of health care providers, states,
payers, and other stakeholders, and serves Medicare, Medicaid, and CHIP
beneficiaries in all 50 states, the District of Columbia, and Puerto
Rico. We estimate that over 4.7 million Medicare, Medicaid, and CHIP
beneficiaries are or soon will be receiving care furnished by the more
than 61,000 eligible clinicians currently participating in models
tested by the CMS Innovation Center.
Beyond the care improvements for these beneficiaries, the
Innovation Center models are affecting millions of additional Americans
by engaging thousands of other health care providers, payers, and
states in model tests and through quality improvement efforts across
the country. Many payers other than CMS have implemented alternative
payment arrangements or models, or have collaborated in the Innovation
Center models. The participation of multiple payers in alternative
delivery and payment models increases momentum for delivery system
transformation and encourages efficiency for health care organizations.
The Innovation Center works directly with other CMS components and
colleagues throughout the federal government in developing and testing
new payment and service delivery models. Other federal agencies with
which the Innovation Center has collaborated include the Centers for
Disease Control and Prevention (CDC), Health Resources and Services
Administration (HRSA), Agency for Healthcare Research and Quality
(AHRQ), Office of the National Coordinator for Health Information
Technology (ONC), Administration for Community Living (ACL), Department
of Housing and Urban Development (HUD), Administration for Children and
Families (ACF), and the Substance Abuse and Mental Health Services
Administration (SAMHSA). These collaborations help the Innovation
Center effectively test new models and execute mandated demonstrations.
7. Stakeholder Input
In developing this final rule with comment period, we sought
feedback from stakeholders and the public throughout the process,
including in the 2016 Medicare PFS Proposed Rule; the Request for Information
Regarding Implementation of the Merit-based Incentive Payment System,
Promotion of Alternative Payment Models, and Incentive Payments for
Participation in Eligible Alternative Payment Models (hereafter
referred to as the MIPS and APMs RFI); listening sessions;
conversations with a wide number of stakeholders; and consultation with
tribes and tribal officials through an All Tribes' Call on May 19, 2016
and several conversations with the CMS Tribal Technical Advisory
Group. Through the MIPS and APMs RFI published in the Federal Register
on October 1, 2015 (80 FR 59102 through 59113), the Secretary of Health
and Human Services (the Secretary) solicited comments regarding
implementation of certain aspects of the MIPS and broadly sought public
comments on the topics in section 101 of the MACRA, including the
incentive payments for participation in APMs and increasing
transparency of PFPMs. We received numerous public comments in response
to the MIPS and APMs RFI from a broad range of sources including
professional associations and societies, physician practices,
hospitals, patient groups, and health IT vendors. On May 9, 2016, we
published in the Federal Register a proposed rule for the Merit-based
Incentive Payment System and Alternative Payment Model Incentive under
the Physician Fee Schedule, and Criteria for Physician-Focused Payment
Models (81 FR 28161 through 28586). In our proposed rule, we provided
the public with proposed policies, implementation strategies, and
regulation text, in addition to seeking additional comments on
alternative and future approaches for MIPS and APMs. The comment period
closed June 27, 2016.
In response to both the RFI and the proposed rule, we received a
high degree of interest from a broad spectrum of stakeholders. We thank
our many commenters and acknowledge their valued input throughout the
proposed rule process. We discuss and respond to the substance of
relevant comments in the appropriate sections of this final rule with
comment period. In general, commenters continue to support
establishment of the Quality Payment Program and maintain optimism as
we
move from FFS Medicare payment towards an enhanced focus on the quality
and value of care. Public support for our proposed approach and
policies in the proposed rule focused on the potential for improving
the quality of care delivered to beneficiaries and increasing value to
the public--while rewarding eligible clinicians for their efforts. In
this early stage of a new program, commenters urged CMS to maintain
flexibility and promote maximized clinician participation in MIPS and
APMs. Commenters also expressed a willingness and desire to work with
CMS to increase the relevance of MIPS activities and measures for
physicians and patients and to expand the number and scope of APMs. We
have sought to reflect these sentiments throughout relevant sections of
this final rule with comment period. Commenters continue to express
concern with elements of the legacy programs incorporated into MIPS. We
appreciate the many comments received regarding the proposed measures
and activities and address those throughout this final rule with
comment period. We intend to work with stakeholders to continually seek
to connect the program to activities and measures that will result in
improvement in care for Medicare beneficiaries. Commenters also
continue to be concerned regarding the burden of current and future
requirements. Although many commenters recognize the reduced burden
from streamlined reporting in MIPS compared to prior programs, they
believe CMS could undertake additional steps to improve reporting
efficiency. We appreciate provider concerns with reporting burden and
have tried to reduce burden where possible while meeting the intent of
the MACRA, including our obligations to improve patient outcomes
through this quality program.
In several cases, commenters made suggestions for changes that we
considered and ultimately found to be inconsistent with the statute. In
keeping with our objectives of maintaining transparency in the program,
we outline in the appropriate sections of the rule suggestions from
commenters that were considered but found to be inconsistent with the
statute.
Commenters have many concerns about their ability to participate
effectively in MIPS in 2017 and the program's impacts on small
practices, rural practitioners, and various specialty practitioner
types. We have attempted to address these concerns by including
transitional policies and additional flexibility in relevant sections
of the final rule with comment period to encourage participation by all
eligible clinicians and practitioner types, and avoid undue impact on
any particular group.
Commenters expressed substantial enthusiasm for broadening
opportunities to participate in APMs and the development of new
Advanced APMs. Commenters suggested that a number of resources be made
available to assist them in moving towards participation in APMs and
submitted numerous proposals for enhancing the APM portfolio and
shortening the development process for new APMs. In particular,
commenters urged us to modify existing Innovation Center models so they
can be classified as Advanced APMs. We appreciate commenters' eagerness
to participate in Advanced APMs and to be a part of transforming care.
While not within the scope of this rule, we note that CMS has developed
in conjunction with this rule a new strategic vision for the
development of Advanced APMs over the coming years that will provide
significantly enhanced opportunities for clinicians to participate in
the program. We thank stakeholders again for their considered responses
throughout our process, in various venues, including comments to the
MIPS and APMs RFI and the proposed rule. We intend to continue open
communication with stakeholders, including consultation with tribes and
tribal officials, on an ongoing basis as we develop the Quality Payment
Program in future years.
II. Provisions of the Proposed Regulations and Analysis of and
Responses to Comments
A. Establishing MIPS and the Advanced APM Incentive
Section 1848(q) of the Act, as added by section 101(c) of the
MACRA, requires establishment of MIPS. Section 101(e) of the MACRA
promotes the development of, and participation in, Advanced APMs for
eligible clinicians.
B. Program Principles and Goals
Through the implementation of the Quality Payment Program, we
strive to continue to support health care quality, efficiency, and
patient safety. MIPS promotes better care, healthier people, and
smarter spending by evaluating MIPS eligible clinicians using a final
score that incorporates MIPS eligible clinicians' performance on
quality, cost, improvement activities, and advancing care information.
Under the incentives for participation in Advanced APMs, our goals,
described in greater detail in section II.F of this final rule with
comment period, are to expand the opportunities for participation in
both APMs and Advanced APMs, improve care quality and reduce health
care costs in current and future Advanced APMs, create clear and
attainable standards for incentives, promote the continued flexibility
in the design of APMs, and support multi-payer initiatives across the
health care market. The Quality Payment Program is designed to
encourage eligible clinicians to participate in Advanced APMs. The APM
Incentive Payment will be available to eligible clinicians who qualify
as QPs through Advanced APMs. MIPS eligible clinicians participating in
APMs (who do not qualify as QPs) will receive favorable scoring under
certain MIPS categories.
Our strategic objectives in developing the Quality Payment Program
include: (1) Improve beneficiary outcomes through patient-centered MIPS
and APM policy development and patient engagement and achieve smarter
spending through strong incentives to provide the right care at the
right time; (2) enhance clinician experience through flexible and
transparent program design and interactions with exceptional program
tools; (3) increase the availability and adoption of alternative
payment models; (4) promote program understanding and participation
through customized communication, education, outreach and support; (5)
improve data and information sharing to provide accurate, timely, and
actionable feedback to clinicians and other stakeholders; (6) deliver
IT systems capabilities that meet the needs of users and are seamless,
efficient and valuable on the front- and back-end; and (7) ensure
operational excellence in program implementation and ongoing
development.
C. Changes to Existing Programs
1. Sunsetting of Current Payment Adjustment Programs
Section 101(b) of the MACRA calls for the sunsetting of payment
adjustments under three existing programs for Medicare enrolled
physicians and other practitioners:
The PQRS that incentivizes EPs to report on quality
measures;
The VM that provides for budget neutral, differential
payment adjustment for EPs in physician groups and solo practices based
on quality of care compared to cost; and
The Medicare EHR Incentive Program for EPs that entails
meeting certain requirements for the use of CEHRT.
Accordingly, we are finalizing revisions to certain regulations
associated with these programs. We are not deleting these regulations
entirely, as the final payment adjustments under these programs will
not occur until the end of 2018. For PQRS, we are revising Sec.
414.90(e) introductory text and Sec. 414.90(e)(1)(ii) to continue
payment adjustments through 2018.
Similarly, for the Medicare EHR Incentive Program for EPs we are
amending Sec. 495.102(d) to remove references to the payment
adjustment percentage for years after the 2018 payment adjustment year
and add a terminal limit of the 2018 payment adjustment year.
We did not make changes to 42 CFR part 414, subpart N--Value-Based
Payment Modifier Under the PFS (Sec. Sec. 414.1200 through 414.1285).
These regulations are already limited to certain years.
The following is a summary of the comments we received regarding
sunsetting current payment adjustment programs:
Comment: Several commenters expressed appreciation for CMS's
decision to streamline the prior reporting programs into MIPS.
Response: We appreciate the commenters' support for our proposals.
Comment: Some commenters were confused by the term ``sunsetting,''
the timeline for when the prior programs ``end,'' and whether there
would be an overlap in reporting.
Response: Because of the nature of regulatory text and statutory
requirements, we cannot delete text from the public record in order to
end or change regulatory programs. Instead, we must amend the text with
a date that marks an end to the program, and we refer to this as
``sunsetting.'' We would also like to clarify that the PQRS, VM, and
Medicare EHR Incentive Program for FFS EPs will ``end'' in 2018 because
that is the final year in which payment adjustments for each of these
programs will be applied. As the commenters noted, however, the
reporting periods or performance periods associated with the 2018
payment year for each of these programs occur prior to 2018. As
discussed in section II.E.4. of this final rule with comment period,
beginning in 2017, MIPS eligible clinicians will report data for MIPS
during, at a minimum, a continuous 90-day period within CY 2017, and
MIPS payment adjustments will begin in 2019 based on the 2017
performance year. Eligible clinicians may also seek to qualify as QPs
through participation in Advanced APMs. Eligible clinicians who are QPs
for the year are not subject to the MIPS reporting requirements and
payment adjustment.
We plan to provide additional educational materials so that
clinicians can easily understand the timelines and requirements for the
existing and the new programs.
Based on the comments received we are finalizing the revision to
PQRS at Sec. 414.90(e) introductory text and Sec. 414.90(e)(1)(ii)
and to the Medicare EHR Incentive Program at Sec. 495.102(d) as
proposed.
2. Supporting Health Care Providers With the Performance of Certified
EHR Technology, and Supporting Health Information Exchange and the
Prevention of Health Information Blocking
a. Supporting Health Care Providers With the Performance of Certified
EHR Technology
We proposed to require EPs, eligible hospitals, and CAHs to attest
(as part of their demonstration of meaningful use under the Medicare
and Medicaid EHR Incentive Programs) that they have cooperated with the
surveillance and direct review of certified EHR technology under the
ONC Health IT Certification Program, as authorized by 45 CFR part 170,
subpart E. Similarly, we proposed to require such an attestation from
all eligible clinicians under the advancing care information
performance category of MIPS, including eligible clinicians who report
on the advancing care information performance category as part of an
APM Entity group under the APM scoring standard.
As we note below, it is our intent to support the participation of
MIPS eligible clinicians, eligible clinicians who are part of an APM
Entity, EPs, eligible hospitals, and CAHs (hereafter collectively
referred to in this section as ``health care providers'') in health IT
surveillance and direct review activities. While cooperating with these
activities may require prioritizing limited time and other resources,
we note that ONC will work with health care providers to accommodate
their schedules and consider other circumstances (80 FR 62715).
Additionally, ONC has established certain safeguards that can minimize
potential burden on health care providers in the event that they are
asked to cooperate with the surveillance of their certified EHR
technology. Examples of these safeguards, which we described in the
proposed rule (81 FR 28171), include: (1) Requiring ONC-Authorized
Certification Bodies (ONC-ACBs) to use consistent, objective, valid,
and reliable methods when selecting locations at which to perform
randomized surveillance of certified health IT (80 FR 62715); (2)
allowing ONC-ACBs to use appropriate sampling methodologies to minimize
disruption to any individual provider or class of providers and to
maximize the value and impact of ONC-ACB surveillance activities for
all providers and stakeholders (80 FR 62715); and (3) allowing ONC-ACBs
to excuse a health care provider from surveillance and select a
different health care provider under certain circumstances (80 FR
62716).
As background to this proposal, we noted that on October 16, 2015,
ONC published the 2015 Edition Health Information Technology (Health
IT) Certification Criteria, 2015 Edition Base Electronic Health Record
(EHR) Definition, and ONC Health IT Certification Program Modifications
final rule (``2015 Edition final rule''). The 2015 Edition final rule
made changes to the ONC Health IT Certification Program that enhance
the testing, certification, and surveillance of health IT. Importantly,
the rule strengthened requirements for the ongoing surveillance of
certified EHR technology and other health IT certified on behalf of
ONC. Under these requirements established by the 2015 Edition final
rule, ONC-ACBs are required to conduct more frequent and more rigorous
surveillance of certified technology and capabilities ``in the field''
(80 FR 62707).
The purpose of in-the-field surveillance is to provide greater
assurance that health IT meets certification requirements not only in a
controlled testing environment, but also when used by health care
providers in actual production environments (80 FR 62707). In-the-field
surveillance can take two forms: First, ONC-ACBs conduct ``reactive
surveillance'' in response to complaints or other indications that
certified health IT may not conform to the requirements of its
certification (45 CFR 170.556(b)). Second, ONC-ACBs carry out ongoing
``randomized surveillance'' based on a randomized sample of all
certified Complete EHRs and Health IT Modules to assess certified
capabilities and other requirements prioritized by the National
Coordinator (45 CFR 170.556(c)). Consistent with the purpose of ONC-ACB
surveillance--which is to verify that certified health IT performs in
accordance with the requirements of its certification when it is
implemented and used in the field--an ONC-ACB's assessment of a
certified capability must be based on the use of the capability in the
live production environment in which the capability has been
implemented and is in use (45 CFR
170.556(a)(1)) and must use production data unless test data is
specifically approved by the National Coordinator (45 CFR
170.556(a)(2)). Throughout this section, we refer to surveillance by an
ONC-ACB as ``surveillance.''
On October 19, 2016, ONC will publish the ONC Enhanced Oversight
and Accountability final rule, which enhances oversight under the ONC
Health IT Certification Program by establishing processes to facilitate
ONC's direct review and evaluation of the performance of certified
health IT in certain circumstances, including in response to problems
or issues that could pose serious risks to public health or safety (see
the October 19, 2016 Federal Register). ONC's direct review of
certified health IT may require ONC to review and evaluate the
performance of health IT in the production environment in which it has
been implemented. Throughout this section, we refer to actions carried
out by ONC under the ONC Enhanced Oversight and Accountability final
rule as ``direct review.''
When carrying out ONC-ACB surveillance or ONC direct review, ONC-
ACBs and/or ONC may request that health care providers supply
information (for example, by way of telephone inquiries or written
surveys) about the performance of the certified EHR technology
capabilities the provider possesses and, when necessary, may request
access to the provider's certified EHR technology (and data stored in
such certified EHR technology) to confirm that capabilities certified
by the developer are functioning appropriately. Health care providers
may also be asked to demonstrate capabilities and other aspects of the
technology that are the focus of such efforts.
In the Quality Payment Program proposed rule, we explained that
these efforts to strengthen surveillance and direct review of certified
health IT are critical to the success of HHS programs and initiatives
that require the use of certified health IT to improve health care
quality and the efficient delivery of care. We explained that effective
ONC-ACB surveillance and ONC direct review is fundamental to providing
basic confidence that the certified health IT used under the HHS
programs consistently meets applicable standards, implementation
specifications, and certification criteria adopted by the Secretary
when it is used by health care providers, as well as by other persons
with whom health care providers need to exchange electronic health
information to comply with program requirements. In particular, the
need to ensure that certified health IT consistently meets applicable
standards, implementation specifications, and certification criteria is
important both at the time the technology is certified (by meeting the
requirements for certification in a controlled testing environment) and
on an ongoing basis to ensure that the technology continues to meet
certification requirements when it is actually implemented and used by
health care providers in real-world production environments. We
explained that efforts to strengthen surveillance and direct review of
certified EHR technology in the field will become even more important
as the types and capabilities of certified EHR technology continue to
evolve and with the onset of Stage 3 of the Medicare and Medicaid EHR
Incentive Programs and MIPS, which include heightened requirements for
sharing electronic health information with other providers and with
patients. Finally, we noted that effective surveillance and direct
review of certified EHR technology is necessary if health care
providers are to be able to rely on certifications issued under the ONC
Health IT Certification Program as the basis for selecting appropriate
technologies and capabilities that support the use of certified EHR
technology while avoiding potential implementation and performance
issues (81 FR 28170-28171).
For all of these reasons, the effective surveillance and direct
review of certified health IT, and certified EHR technology as it
applies to providers covered by this provision, provide greater
assurance to health care providers that their certified EHR technology
will perform in a manner that meets their expectations and that will
enable them to demonstrate that they are using certified EHR technology
in a meaningful manner as required by sections 1848(o)(2)(A)(i) and
1886(n)(3)(A)(i) of the Act. We stressed in the proposed rule (81 FR
28170-28171), however, that such surveillance and direct review will
not be effective unless health care providers are actively engaged and
cooperate with these activities, including by granting access to and
assisting ONC-ACBs and ONC to observe the performance of production
systems (see also the 2015 Edition final rule at 80 FR 62716).
Accordingly, we proposed that as part of demonstrating the use of
certified EHR technology in a meaningful manner, a health care provider
must demonstrate its good faith cooperation with authorized
surveillance and direct review. We proposed to revise the definition of
a meaningful EHR user at Sec. 495.4 as well as the attestation
requirements at Sec. 495.40(a)(2)(i)(H) and Sec. 495.40(b)(2)(i)(H)
to require EPs, eligible hospitals, and CAHs to attest their
cooperation with certain authorized health IT surveillance and direct
review activities as part of demonstrating meaningful use under the
Medicare and Medicaid EHR Incentive Programs. Similarly, we proposed to
include an identical attestation requirement in the submission
requirements for MIPS eligible clinicians under the advancing care
information performance category proposed at Sec. 414.1375.
We proposed that health care providers would be required to attest
that they have cooperated in good faith with the authorized ONC-ACB
surveillance and ONC direct review of their health IT certified under
the ONC Health IT Certification Program, as authorized by 45 CFR part
170, subpart E, to the extent that such technology meets (or can be
used to meet) the definition of CEHRT. Under the terms of the
attestation, we stated that such cooperation would include responding
in a timely manner and in good faith to requests for information (for
example, telephone inquiries and written surveys) about the performance
of the certified EHR technology capabilities in use by the provider in
the field (81 FR 28170 through 28171). It would also include
accommodating requests (from ONC-ACBs or from ONC) for access to the
provider's certified EHR technology (and data stored in such certified
EHR technology) as deployed by the health care provider in its
production environment, for the purpose of carrying out authorized
surveillance or direct review, and to demonstrate capabilities and
other aspects of the technology that are the focus of such efforts, to
the extent that doing so would not compromise patient care or be unduly
burdensome for the health care provider.
We stated that the proposed attestation would support providers in
meeting the requirements for the meaningful use of certified EHR
technology while at the same time minimizing burdens for health care
providers and patients (81 FR 28170 through 28171). We requested public
comment on this proposal.
Through public forums, listening sessions, and correspondence
received by CMS and ONC, and through the methods available for health
care providers to submit \2\ technical concerns related to the function
of their certified EHR technology, we have received requests that ONC
and CMS assist providers in mitigating issues with the performance of
their technology,
[[Page 77021]]
including issues that relate to the safety and interoperability of
health IT. Our proposal was designed to help health care providers with
these very issues by strengthening participation in surveillance and
direct review activities that help assure that their certified EHR
technology performs as intended. However, the comments we have
received, and which we discuss below, suggest that the support that the
policy provides for health IT performance was not understood by some
stakeholders. For this reason, we are adopting a modification to the
title and language describing this policy in this final rule with
comment period to reflect the intent articulated in the proposed rule
and to be responsive to the concerns raised by commenters.
As we have explained, our proposal to require that health care
providers cooperate with ONC-ACB surveillance of certified health IT
and ONC direct review of certified health IT reflects the need to
address technical issues with the functionality of certified EHR
technology and to support health care providers with the performance of
their certified EHR technology. By cooperating with these activities,
health care providers would assist ONC-ACBs and ONC in working with
health IT developers to identify and rectify problems and issues with
their technology. In addition, a health care provider who assists an
ONC-ACB or ONC with these activities is also indirectly supporting
other health care providers, interoperability goals, and the health IT
infrastructure by helping to ensure the integrity and efficacy of
certified health IT products in health care settings. To more clearly
and accurately communicate the context and role of health care
providers in these activities, and consistent with our approach to
clarifying terminology and references, we have adopted new terminology
in this final rule with comment period that focuses on the requirements
for the health care provider rather than ONC or ONC-ACB actions and
processes. In this section, the activities to be engaged in by health
care providers in cooperation with ONC direct review or ONC-ACB
surveillance are intended to support health care providers with the
performance of certified EHR technology. We therefore use the phrase
``Supporting Providers with the Performance of Certified EHR technology
activities'' (hereinafter referred to as ``SPPC activities'') to refer
to a health care provider's actions related to cooperating in good
faith with ONC-ACB authorized surveillance and, separately or
collectively as the context requires, a health care provider's actions
in cooperating in good faith with ONC direct review.
Notwithstanding the terminology used in this final rule with
comment period, and to avoid any confusion for health care providers
engaging with ONC-ACBs or ONC in the future, we note that, when
communicating with health care providers about the surveillance or
direct review of certified health IT, ONC-ACBs and ONC will use the
terminology in the 2015 Edition final rule, the ONC Enhanced Oversight
and Accountability final rule, or other relevant ONC rulemakings and
regulations, if applicable. In particular, a request for cooperation
made by an ONC-ACB to a health care provider will not refer to ``SPPC
activities.'' Rather, the request will typically refer to the ONC-ACB's
need to carry out ``surveillance'' of the certified health IT used by
the health care provider. Similarly, if ONC requests the cooperation of
a health care provider in connection with ONC's direct review of
certified health IT, as described in the ONC Enhanced Oversight and
Accountability final rule scheduled for publication in the Federal
Register on October 19, 2016, ONC will not use the terminology ``SPPC
activities.'' Rather, ONC will request the cooperation of the health
care provider with ONC's ``direct review'' or ``review'' of the
certified health IT. In addition, throughout this final rule with
comment period, we use the term ``health IT vendor'' to refer to third
party entities supporting providers with technology requirements for
the Quality Payment Program. In this section, we instead use the term
``health IT developer'' to distinguish between these third parties and
those developers of a health IT product under the ONC rules. In order
to maintain consistency with the ONC rules, we use the term ``health IT
developer'' for those that have presented a health IT product to ONC
for certification.
We received public comment on the proposals and our response
follows.
Comment: Several commenters expressed concern that the proposed
attestation would be unduly burdensome for health care providers. A
number of commenters stated that requiring health care providers to
engage in SPPC activities related to their certified EHR technology
would place a disproportionate burden on providers relative to other
stakeholders who share the responsibility of advancing the use of
health IT and the exchange of electronic health information. More
specifically, several commenters stated that SPPC activities related to
a provider's certified EHR technology could disrupt health care
operations. According to one commenter, this disruption may be
especially burdensome for small practices who may need to engage a
third party to assist them in cooperating in good faith with a request
to assist ONC or an ONC-ACB, such as evaluating the performance of
certified EHR technology capabilities in the field. Another commenter
requested clarification on how evaluations of certified EHR technology
would be conducted in production environments without disturbing
patient encounters and clinical workflows.
Commenters offered a number of suggestions to reduce the potential
burden of this proposal on health care providers. First, some
commenters strongly endorsed the safeguards established by ONC--
including methods used to select locations, such as sampling and
weighting considerations and the exclusion of certain locations in
appropriate circumstances. In addition, one commenter recommended that,
where ONC-ACB surveillance or ONC direct review involves evaluating
certified EHR technology in the field, the ONC-ACB surveillance or ONC
direct review should be scheduled 30 days in advance and at a time that
is convenient to accommodate the health care providers' schedules, such
as after hours or on weekends. The commenter suggested that this would
avoid disruption both to administrative operations and patient care.
Response: We understand that, if a request to assist ONC or an ONC-
ACB is received, cooperating in good faith may require providers to
prioritize limited time and other resources--especially for in-the-
field evaluations of certified EHR technology. As we explained in the
proposed rule, we believe that several safeguards established by ONC
will minimize the burden of these activities (81 FR 28171). We note
that under the 2015 Edition final rule, randomized surveillance is
limited annually to 2 percent of unique certified health IT products
(80 FR 62714). To illustrate the potential impact of these activities,
for CY 2016 ONC estimates that up to approximately 24 products would be
selected by each of its three ONC-ACBs, for a maximum of 72 total
products selected across all ONC-ACBs (80 FR 62714). While ONC-ACB
surveillance may be carried out at one or more locations for each
product selected, we believe the likelihood that a health care provider
will be asked to participate in the ONC-ACB surveillance of that
product will in many cases be quite small due to the
[[Page 77022]]
number of other health care providers using the health IT product.
Further, the 2015 Edition final rule states that ONC-ACBs may use
appropriate sampling methodologies to minimize disruption to any
individual or class of health care providers and to maximize the value
and impact of randomized surveillance for all health care providers and
stakeholders (80 FR 62715). In addition, we reiterate that if an ONC-
ACB is unable to complete its randomized surveillance of certified EHR
technology at a particular location--such as where, despite a good
faith effort, the health care provider at a chosen location is unable
to provide the requisite cooperation--the ONC-ACB may exclude the
location and substitute a different location for observation (see ONC
2015 Edition final rule 80 FR 62716). ONC has also explained that in
many cases in-the-field evaluations of certified EHR technology may be
accomplished through an in-person site visit or may instead be
accomplished remotely (80 FR 62708). Thus, in general, we expect that
health care providers will be presented with a choice of evaluation
approaches and be able to choose one that is convenient for their
practice.
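The volume estimates cited in this response can be checked with simple arithmetic. The sketch below uses the CY 2016 figures from 80 FR 62714 (approximately 24 products per ONC-ACB, three ONC-ACBs, and the 2 percent annual cap on randomized surveillance); the implied total count of certified products per ONC-ACB is our back-of-envelope inference, not a figure stated in the rule:

```python
# Back-of-envelope check of the surveillance volume estimates cited
# above (80 FR 62714). The per-ACB selection figure and the number of
# ONC-ACBs come from the rule; the implied universe of certified
# products is an inference from the 2 percent cap.
products_per_acb = 24        # approx. products selected per ONC-ACB (CY 2016 estimate)
num_acbs = 3                 # ONC-ACBs performing randomized surveillance
annual_cap_fraction = 0.02   # randomized surveillance capped at 2% of products

max_selected = products_per_acb * num_acbs
implied_universe_per_acb = products_per_acb / annual_cap_fraction

print(max_selected)              # 72 products across all ONC-ACBs
print(implied_universe_per_acb)  # ~1,200 certified products per ONC-ACB (inference)
```

Because any one certified product is typically deployed at many locations, the chance that a given provider's site is selected in a given year is smaller still.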
We also understand the concerns expressed by some commenters that
engaging in SPPC activities should not unreasonably disrupt the
workflow or operations of a health care provider. In consultation with
ONC, we expect that in most cases ONC and ONC-ACBs will accommodate
providers' schedules and other circumstances, and that in most cases
providers will be given ample notice of and time to respond to requests
from ONC and ONC-ACBs. We note that in some cases it may be necessary
to secure a health care provider's cooperation relatively quickly, such
as if a potential problem or issue with certified EHR technology poses
potentially serious risks to public health or safety (see the ONC
Enhanced Oversight and Accountability final rule scheduled for
publication in the Federal Register on October 19, 2016).
    Finally, we note that, in addition to the specific concerns
regarding SPPC activities expressed and addressed above, public
comments on the proposed rule indicate that stakeholders share a
general concern over
the risks and potential negative impact of transitioning to MIPS and
upgrading certified health IT in a short time without adequate
preparation and support. Stakeholders are particularly concerned about
this impact on solo practitioners, small practices, and health care
providers with limited resources that may be providing vital access to
health care in under-served communities. As noted previously, we
believe the safeguards and policies established for ONC-ACBs'
activities, discussed above, mitigate the risk of disruption to health
care providers under normal circumstances. However, consistent with our
overall approach for implementing new programs and requirements such as
the Quality Payment Program and historically under the EHR Incentive
Programs, we are modifying our final policy from the proposal to allow
for additional flexibility for health care providers.
Our proposed policy would require health care providers to attest
that they cooperated in good faith with ONC-ACB surveillance and ONC's
direct review of certified health IT in order to demonstrate they have
used certified EHR technology in a meaningful manner. In this final
rule with comment period, we are finalizing a modified approach that
splits the SPPC activities into two parts and draws a distinction
between cooperation with ONC direct review and cooperation with ONC-ACB
surveillance requests.
We are finalizing as proposed the requirement to cooperate in good
faith with a request relating to ONC direct review of certified health
IT. We do not believe it is appropriate to modify this requirement
because ONC direct review is designed to mitigate potentially serious
risk to public health and safety and to address practical challenges in
reviewing certified health IT by an ONC-ACB. However, we are finalizing
a modification to the requirement to cooperate with a request relating
to ONC-ACB surveillance, which is different from ONC direct review (see
discussion above). The modification to ONC-ACB surveillance will allow
providers to choose whether to participate in SPPC activities
supporting ONC-ACB surveillance of certified EHR technology.
As described in this section, ONC direct review focuses on
situations involving (1) public health and safety and (2) practical
challenges for ONC-ACBs, such as when a situation exceeds an ONC-ACB's
resources or expertise. We maintain that cooperation in ONC direct
review, when applicable, is important to demonstrating that a health
care provider used certified EHR technology in a meaningful manner as
required by sections 1848(o)(2)(A)(i) and 1886(n)(3)(A)(i) of the Act
as stated in the proposed rule (81 FR 28170 through 28171).
    We are therefore finalizing a two-part attestation that splits the
SPPC activities. As it relates to ONC direct review, the attestation is
required. As it relates to ONC-ACB surveillance, the attestation is
optional. The attestations are as follows:
Health care providers must attest that they engaged in
good faith in SPPC activities related to ONC direct review by: (1)
Attesting their acknowledgment of the requirement to cooperate in good
faith with ONC direct review of their health information technology
certified under the ONC Health IT Certification Program if a request to
assist in ONC direct review is received; and (2) if a request is
received, attesting that they cooperated in good faith in ONC direct
review of health IT under the ONC Health IT Certification Program to
the extent that such technology meets (or can be used to meet) the
definition of certified EHR technology.
Optionally, health care providers may attest that they
engaged in good faith in SPPC activities related to ONC-ACB
surveillance by: (1) Attesting their acknowledgement of the option to
cooperate in good faith with ONC-ACB surveillance of their health
information technology certified under the ONC Health IT Certification
Program if a request to assist in ONC-ACB surveillance is received; and
(2) if a request is received, attesting that they cooperated in good
faith in ONC-ACB surveillance of health IT under the ONC Health IT
Certification Program, to the extent that such technology meets (or can
be used to meet) the definition of certified EHR technology.
As noted previously, only a small percentage of providers are
likely to receive a request for assistance from ONC or an ONC-ACB in a
given year. Therefore under this final policy, for both the mandatory
attestation and for the optional attestation, a health care provider is
considered to be engaging in SPPC activities related to supporting
providers with the performance of certified EHR technology first by an
attestation of acknowledgment of the policy and second by an
attestation of cooperation in good faith if a request to assist was
received from ONC or an ONC-ACB. However, we reiterate that the
attestation requirement as it pertains to cooperation with ONC-ACB
surveillance is optional for health care providers.
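The logic of the finalized two-part attestation can be sketched as follows. This is purely an illustration of the policy described above; the function and field names are hypothetical and do not reflect any actual CMS submission format or system:

```python
# Hypothetical sketch of the finalized two-part attestation logic.
# Names are illustrative only, not an actual CMS interface.

def attestation_complete(direct_review_ack: bool,
                         direct_review_requested: bool,
                         direct_review_cooperated: bool,
                         surveillance_part_submitted: bool) -> bool:
    """Part 1 (ONC direct review) is mandatory: the provider must
    acknowledge the requirement and, if a request to assist was
    received, attest to good-faith cooperation. Part 2 (ONC-ACB
    surveillance) is optional and never blocks the attestation."""
    part1 = direct_review_ack and (
        direct_review_cooperated if direct_review_requested else True)
    # surveillance_part_submitted is optional either way
    return part1

# A provider who acknowledged the policy and received no request
# satisfies the mandatory part:
print(attestation_complete(True, False, False, False))  # True
```

As the sketch shows, declining the optional ONC-ACB surveillance attestation has no effect on whether the mandatory direct review attestation is satisfied.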
Operationally, we expect that the submission method selected by the
health care provider will influence how these attestations are
accomplished (see section II.E.5.a on MIPS submission mechanisms for
details, or the 2015 EHR Incentive Programs final rule (80 FR 62896
through 62901)). For example, a Medicaid EP attesting to their state for the EHR
Incentive Programs may be provided a series of statements within the
[[Page 77023]]
attestations system. In this case the attestation would be offered in
two parts. For the first part, in order to successfully demonstrate
meaningful use, the EP must attest that they engaged in SPPC activities
related to ONC direct review of certified EHR technology, first by
their acknowledgement of the policy, and second by attesting that they
cooperated in good faith with ONC direct review of the certified EHR
technology if a request to assist was received. For the second part in
this example, the Medicaid EP may choose to attest that they engaged in
SPPC activities related to ONC-ACB surveillance of certified EHR
technology, including attesting to having cooperated in good faith if a
request to assist was received, or the EP may choose not to so attest.
A health care provider electronically submitting data for MIPS
would be required to use the form and manner specified for the
submission mechanism to indicate their attestation to the first part,
and may indicate their attestation to the second part if they so
choose. CMS and ONC will also offer continued support and guidance both
through educational resources to support participating in and reporting
to CMS programs, and through specific guidance for those health care
providers who receive requests related to engaging in SPPC activities.
Comment: Several commenters opposed any in-the-field observation of
a health care provider's certified EHR technology and insisted that
such observations be conducted with the developer of the certified EHR
technology instead. Some commenters questioned the need to perform
observations of certified EHR technology in production environments,
observing that health care providers and other users of certified EHR
technology often depend on the developer of the certified EHR
technology to deliver required functionality and capabilities. One
commenter recommended that the observation of certified EHR technology
be limited to the use of test systems and test data rather than
observation of production systems and data.
Several commenters stated that health care providers should not be
required to cooperate with on-premises observation of their certified
EHR technology because an ONC-ACB should be able to access and evaluate
the performance of certified health IT capabilities using remote access
methods. By contrast, other commenters stated that remote observation
could create security risks and that all observations should be
conducted on the premises, preferably under the direction of the health
care provider's clinical staff.
Response: To provide adequate assurance that certified EHR
technology meets applicable certification requirements and provides the
capabilities health care providers need, it is critical to determine
not only how certified EHR technology performs in a controlled testing
environment but also how it performs in the field. Indeed, a
fundamental purpose of ONC-ACB surveillance and ONC direct review is to
allow ONC-ACBs and ONC to identify problems or deficiencies in
certified EHR technology that may only become apparent once the
technology has been implemented and is in use by health care providers
in production environments (80 FR 62709). These activities necessarily
require the cooperation of the clinicians and other persons who
actually use the capabilities of certified EHR technology implemented
in production environments, including health care providers (see 81 FR
28170 through 28171). This cooperation ultimately benefits health care providers
and is critical to provider success in the Medicare and Medicaid EHR
Incentive Programs and MIPS because it provides confidence that
certified EHR technology capabilities will function as expected and
that health care providers will be able to demonstrate compliance with
CMS program requirements.
We decline to limit health care providers' engagement in SPPC
activities to any particular form of observation, such as on-premises
or remote observation of certified capabilities. We note that in the
2015 Edition final rule, ONC explained the observation of certified
health IT capabilities in a production environment may require a
variety of methodologies and approaches (80 FR 62709). In addition, as
the comments suggest, individual health care providers are likely to
have different preferences and should have the flexibility to work with
an ONC-ACB or ONC to identify an approach to these activities that is
most effective and convenient. In this connection, we have consulted
with ONC and expect that, where feasible, a health care provider's
preference for a particular form of observation will be accommodated.
For similar reasons, we decline to limit engagement in SPPC
activities to the use of test systems or test data. The use of test
systems and test data may be allowed in some circumstances, but may not
be appropriate in all circumstances. For example, a problem with
certified EHR technology capabilities may be difficult or impossible to
replicate with test systems or test data. More fundamentally, limiting
cooperation to observations of test systems and test data may not
provide the same degree of assurance that certified EHR technology used
by health care providers (for example, production systems used with
production data) continues to meet applicable certification
requirements and functions in a manner that supports health care
providers' participation in the EHR Incentive Programs and MIPS.
Comment: One commenter suggested that health care providers who
engage in SPPC activities be able to file a formal complaint with ONC
or CMS in the event that the ONC-ACB were to ``handle matters
inappropriately,'' and that the ONC-ACB should not be permitted to
continue its activities until the complaint has been resolved.
Response: If a provider has any concerns about the propriety of an
ONC-ACB's conduct, including in connection with a request to assist in
ONC-ACB surveillance of certified health IT or during in-the-field
surveillance of the certified EHR technology, the health care provider
should make a formal complaint to ONC detailing the conduct in
question. For further information, we direct readers to ONC's Web site:
https://www.healthit.gov/healthitcomplaints.
Comment: A number of commenters were opposed to or raised concerns
regarding this proposal on the grounds that requiring health care
providers to engage in SPPC activities would violate the HIPAA Rules.
Relatedly, a number of commenters stated that requiring providers to
give ONC or ONC-ACBs access to their production systems may be
inconsistent with a health care organization's privacy or security
policies and could introduce security risks. A few commenters stated
that observation of certified EHR technology in the field would violate
patients' or providers' privacy rights or expectations. Some of these
commenters expressed the view that any requirement to engage in SPPC
activities would be an unjustified governmental invasion of privacy or
other interests.
Response: As noted in the Quality Payment Program proposed rule and
in the 2015 Edition final rule, in consultation with the Office for
Civil Rights, ONC has clarified that as a result of ONC's health
oversight authority a health care provider is permitted, without
patient authorization, to disclose PHI to an ONC-ACB or directly to ONC
for purposes of engaging in SPPC activities in cooperation with a
request to assist from ONC or an ONC-ACB (81 FR 28171; 80 FR 62716).
Health
[[Page 77024]]
care providers are permitted without patient authorization to make
disclosures to a health oversight authority (as defined in 45 CFR
164.501) for oversight activities authorized by law (as described in 45
CFR 164.512(d)), including activities to determine compliance with
program standards, and ONC may delegate its authority to ONC-ACBs to
perform surveillance of certified health IT under the Program.\3\ This
disclosure of PHI to an ONC-ACB does not require a business associate
agreement with the ONC-ACB since the ONC-ACB is not performing a
function on behalf of the covered entity. In the same way, a provider,
health IT developer, or other person or entity is permitted to disclose
PHI directly to ONC, without patient authorization and without a
business associate agreement, for purposes of ONC's direct review of
certified health IT or the performance of any other oversight
responsibilities of ONC to determine compliance under the Program.
---------------------------------------------------------------------------
\3\ See, 45 CFR 164.512(d)(1)(iii); 80 FR 62716 and ONC
Regulation FAQ #45 [12-13-045-1]. Available at http://www.healthit.gov/policy-researchers-implementers/45-question-12-13-045.
---------------------------------------------------------------------------
We disagree with commenters who maintained that the disclosure of
PHI to ONC or an ONC-ACB could be inconsistent with reasonable privacy
or other organizational policies or would otherwise be an unjustified
invasion of privacy or any other interest. As noted, the disclosure of
this information would be authorized by law on the basis that it is a
disclosure to a health oversight agency (ONC) for the purpose of
determining compliance with a federal program (the ONC Health IT
Certification Program). In addition, we note that any further
disclosure of PHI by an ONC-ACB or ONC would be limited to disclosures
authorized by law, such as under the federal Privacy Act of 1974, or
the Freedom of Information Act (FOIA), as applicable.
Comment: Several commenters requested clarification concerning the
types of production data that ONC or an ONC-ACB would be permitted to
access (and that a health care provider would make accessible to ONC,
or the ONC-ACB) when assessing certified EHR technology in a production
environment. Several commenters recommended that production data be
limited to the certified capabilities and not extend to other aspects
of the health IT.
Response: A request to assist in ONC-ACB surveillance or ONC direct
review may include in-the-field surveillance or direct review of the
certified EHR technology to determine whether the capabilities of the
health IT are functioning in accordance with the requirements of the
ONC Health IT Certification Program. We note that it is common for
certified EHR technology to be deployed and integrated with other
technologies (including technologies that produce data used across
multiple systems and components). Therefore, determining whether
certified EHR technology is operating as it should could include, for
example, ONC reviewing whether the certified EHR technology
malfunctions when it interacts with other technologies. We also refer
commenters to the 2015 Edition final rule
and the ONC Enhanced Oversight and Accountability final rule for more
information about the scope of ONC-ACB surveillance and ONC direct
review, and for a discussion about the types of capabilities that may
be subject to ONC-ACB surveillance and ONC direct review.
Comment: A commenter observed that while the proposed attestation
would be retrospective, health care providers may be unaware of the
requirement to engage in SPPC activities until they are presented with
the attestation statement. The commenter suggested that health care
providers be required to attest only that they will prospectively
engage in SPPC activities.
    Response: The attestation is retrospective because it is part of a
health care provider's demonstration that it has used certified EHR
technology in a meaningful manner for a certain period. Based on our
consultation with ONC, health care providers will be made aware of
both their obligation to cooperate if they are contacted to assist in
ONC direct review of certified health IT and their option to cooperate
if they are contacted to assist an ONC-ACB in surveillance of certified
health IT. Thus, we believe that health care providers will be able to
appropriately engage in SPPC activities for CMS programs and attest to
their cooperation.
Comment: A commenter urged that health care providers be held
harmless if engagement in SPPC activities results in a finding that
their certified EHR technology no longer conforms to the requirements
of the ONC Health IT Certification Program due to the actions of the
certified EHR technology developer.
    Response: ONC-ACB surveillance and ONC direct review provide an
opportunity to assess the performance of certified EHR technology
capabilities in a production environment to determine whether the
technology continues to perform in accordance with the requirements of
the ONC Health IT Certification Program. This analysis will necessarily
be focused on the performance of the technology, which may require the
consideration of a provider's use of the technology. However, health
care providers that cooperate with the analysis of the performance of
certified EHR technology are not themselves subject to ONC or an ONC-
ACB's authority under, as applicable, the surveillance requirements of
the 2015 Edition final rule, or the direct review requirements of the
ONC Enhanced Oversight and Accountability final rule. As such, no
adverse finding or determination can be made by ONC or an ONC-ACB
against a provider in connection with ONC direct review or ONC-ACB
surveillance. If ONC or an ONC-ACB determined that the performance
issue being analyzed arose solely from the provider's use of the
technology and not from a problem with the technology itself, ONC or an
ONC-ACB would not make a nonconformity finding against the health IT,
but may decide to notify the provider of its determination for
information purposes only. We do acknowledge, however, that if in the
course of ONC-ACB surveillance or ONC direct review, ONC became aware
of a violation of law or other requirements, ONC could share that
information with relevant federal or state entities. If a certified
health IT product is determined to no longer conform with the
requirements of the ONC Health IT Certification Program and the health
IT's certification were to be terminated by ONC or withdrawn by an ONC-
ACB, there exists a process by which an affected health care provider
may apply for an exception from payment adjustments related to CMS
programs on the basis of significant hardship, or for exclusion from
the requirement. For example, we direct readers to CMS FAQ #12657 \4\
related to hardship exceptions for the EHR Incentive Programs related
to the certification of a health IT product being terminated or
withdrawn.
---------------------------------------------------------------------------
\4\ CMS FAQ#12657 ``What if your product is decertified?'':
https://questions.cms.gov/faq.php?isDept=0&search=decertified&searchType=keyword&submitSearch=1&id=5005.
---------------------------------------------------------------------------
Comment: Multiple commenters suggested that, in lieu of the
proposed attestation, we provide incentives to encourage voluntary
participation in SPPC activities, such as counting voluntary
participation towards an eligible clinician's performance score for the
advancing care information category of MIPS.
Response: We have considered the commenters' suggestion but
conclude
[[Page 77025]]
that it would be impracticable for two main reasons. First, a key
component of the oversight of certified EHR technology is the
randomized surveillance of certified EHR technology by ONC-ACBs. To
ensure a representative sample, we believe it is important that all
health care providers who are required to use certified EHR technology,
whether as an EP, eligible hospital, or CAH under the Medicare and
Medicaid EHR Incentive Programs or as a MIPS eligible clinician under
the advancing care information performance category, be part of the
pool from which ONC-ACBs select locations for in-the-field
surveillance, not only those who volunteer to participate. Second, as
we explained in
connection with commenters' concerns regarding the potential impact of
SPPC activities on providers, we anticipate that the opportunity for
health care providers to participate in randomized surveillance of
their certified EHR technology will arise relatively infrequently due
to the relatively small number of practices and other locations that
would be selected for this type of ONC-ACB surveillance. This means
that only a limited number of health care providers would have an
opportunity to participate in this way for reasons outside the control
of the health care provider. Consequently, health care providers would
not have an equal opportunity to participate in these activities, which
would make adopting an incentive within the scoring methodology for
these activities potentially unfair to providers who are participating
in CMS programs but are not selected by the randomized selection
process. This would unfairly skew scores in a manner unrelated to a
health care provider's performance in a given program. For these
reasons we decline to adopt such an arrangement.
Comment: Multiple commenters stated that this proposal was
premature because ONC has yet to finalize the ONC Health IT
Certification Program: Enhanced Oversight and Accountability proposed
rule. Commenters urged us to withdraw the proposal until such time as
any changes to the ONC Health IT Certification Program have been
finalized.
Response: We recognize that the pendency of the ONC Health IT
Certification Program: Enhanced Oversight and Accountability proposed
rule, which outlines the policies for ONC direct review of certified
health IT, at the time of our proposal may have been challenging for
some commenters. However, health care provider engagement in SPPC
activities is important regardless of whether a request to assist
relates to ONC direct review of certified health IT or ONC-ACB
surveillance of certified health IT. As we have explained, we expect
health care providers will engage in SPPC activities because doing so
is fundamental to ensuring that certified EHR technology performs in a
manner that supports the goals of health care providers seeking to meet
the requirements of the MIPS and Medicare and Medicaid EHR Incentive
Programs. We further believe that the publication of the ONC Enhanced
Oversight and Accountability final rule in concert with the
flexibilities finalized in this final rule with comment period, as well
as the timeline for implementation of these policies, which apply to
reporting periods beginning in CY 2017, supports resolution of this
concern.
Comment: A commenter stated that the proposed attestation would
compel meaningful EHR users to cooperate with far-ranging or unbounded
inquiries into their certified health IT. Other commenters expressed
similar concerns and pointed to what they perceived as the broad range
of issues that could be subject to ONC's direct review under the ONC
Health IT Certification Program: Enhanced Oversight and Accountability
proposed rule.
Response: We reiterate that, whatever form engagement in SPPC
activities may take, any conclusions by ONC or ONC-ACBs will
necessarily be focused on the performance of the technology. Moreover,
as we have explained, health care providers will only be required to
attest to their engagement in SPPC activities in relation to requests
received to assist in ONC direct review of certified capabilities of
their health IT that meet (or can be used to meet) the definition of
certified EHR technology. Further, because a health care provider's
attestation will be retrospective, as noted previously, the attestation
relates only to an acknowledgment (if no request was received) or to
the health care provider's cooperation with requests for assistance
already received at the time of the attestation. The
attestation requirement does not require that health care providers
commit to engaging in unknown future activities.
Comment: A commenter requested more information about the
circumstances that would trigger direct review of certified EHR
technology. Separately, the commenter recommended that such review be
conducted only as part of an audit of a health care provider's
demonstration of meaningful use or an eligible clinician's reporting
for the advancing care information performance category.
Response: ONC determines the requirements for and circumstances
under which health IT may be subject to ONC-ACB surveillance or ONC
direct review under the ONC Health IT Certification Program. We refer
the commenter to the 2015 Edition final rule (80 FR 62601) for a
discussion of existing requirements related to the observation of
certified health IT by ONC-ACBs and to the ONC Enhanced Oversight and
Accountability final rule (scheduled for publication in the Federal
Register on October 19, 2016) for a discussion of ONC's direct review
activities. To be effective, ONC-ACB surveillance or ONC direct review
involving SPPC activities must be timely in order to identify an issue
with the certified health IT. If these actions are limited to the timing of
retrospective audits of a health care provider's compliance with
program requirements, they may not reflect the current implementation
of the technology in a production setting where the issue exists. For
these reasons, it is not appropriate for a health care provider's
cooperation to be limited to the context of a program audit on prior
participation.
Comment: To assist health care providers in complying with the
proposed attestation, a commenter recommended that any requests for
engagement in SPPC activities be clearly labeled as such so as to
differentiate them from other types of communications.
Response: We acknowledge this commenter's concern that, to support
health care providers engaging in SPPC activities, a request to assist
should be designed to clearly inform the recipient as to the purpose of
the communication and avoid, as much as possible, the request being
inadvertently overlooked or unnoticed. We have consulted with ONC and
clarify that ONC-ACBs currently initiate contact with health care
providers for randomized surveillance by emailing the person or office
holder of a practice or organization that is the primary contact for
the health IT developer whose product is being surveilled or reviewed.
The contact information is supplied by the developer, and ONC-ACBs
would not ordinarily contact a health care provider directly unless
they are identified by the developer as being the most appropriate
point of contact for a practice location. However, we note that in
addition to clarity on the point of contact, clarity within the request
itself is essential for the health care provider engaging in SPPC
activities. This relates not only to clarity as to the purpose of the
request, but also in relation to the mandatory and optional SPPC
activities, which are differentiated based on whether the request is
for ONC direct review of
[[Page 77026]]
certified health IT or ONC-ACB surveillance of certified health IT.
As program guidance is developed, CMS and ONC will work to ensure
that requests from ONC and ONC-ACBs provide clear context and guidance
for health care providers when requesting that health care providers
engage in SPPC activities as part of their participation in CMS
programs.
Comment: A commenter stated that some EHR contracts specifically
prohibit customers or users of certified EHR technology from providing
ONC or ONC-ACBs with access to the technology or data.
Response: Developers of certified health IT are required to
cooperate with ONC program activities such as ONC direct review or ONC-
ACB surveillance of certified health IT, which includes furnishing
information to ONC or an ONC-ACB that is necessary to the performance
of these activities (see 80 FR 62716-18) in order to obtain and
maintain certification of health IT. Access to certified health IT that
is under observation by ONC or an ONC-ACB, together with production
data relevant to the certified capability or capabilities being
assessed, is essential to this process. For example, in the 2015
Edition final rule, ONC stated that a health IT developer must furnish
to the ONC-ACB, upon request, accurate and complete customer lists, user
lists, and other information that the ONC-ACB determines is necessary
to enable it to carry out its surveillance responsibilities (80 FR
62716). If a health care provider reasonably believes that it is unable
to engage in SPPC activities due to these or other actions of its
health IT developer, the health care provider should notify ONC or the
ONC-ACB, as applicable. If the developer has indeed limited,
discouraged, or prevented the health care provider from cooperation in
good faith with a request to assist ONC direct review, the health care
provider would not be required to cooperate with such activities unless
and until the developer removed the contractual restrictions or other
impediments.
Comment: A commenter expressed concern about sharing data with ONC
or an ONC-ACB without a clear description of the data to be accessed.
Response: The nature of the data that will need to be accessed by
ONC or an ONC-ACB will be made clear to the health care provider at the
time that their cooperation is sought. To alleviate any concerns
commenters may have, we will work with ONC to provide guidance to ONC-
ACBs and to providers, as necessary, to address issues such as the
communication protocols to be used when requesting a health care
provider's engagement in SPPC activities.
Comment: Several commenters requested additional guidance on
specific actions health care providers would be expected to take to
engage in SPPC activities and cooperate in good faith with a request to
assist if so requested. One commenter recommended that CMS and ONC
create a check-list tool that clinicians could use to track their
compliance with the required activities.
Response: As specified in the proposed rule, engaging in SPPC
activities and cooperation in good faith may simply require the
provision of information, such as in response to telephone inquiries
and written surveys, about the performance of the certified EHR
technology being used. Engagement in SPPC activities and cooperation in
good faith might also involve facilitating requests (from ONC or ONC-
ACBs) for access to the certified EHR technology (and related data) as
deployed in the provider's production environment, and for
demonstrations of capabilities and other aspects of the technology that
are the focus of the ONC-ACB surveillance or ONC direct review.
Because assistance with ONC-ACB surveillance or ONC direct review
will typically be carried out at a practice or facility level, we
expect that it will be rare for a health care provider to be directly
involved in the conduct of many of these activities, including in-the-
field observations of certified EHR technology capabilities. To comply
with the attestation requirements, a health care provider should
establish to their own satisfaction that appropriate processes and
policies are in place in their practice to ensure that all relevant
personnel, such as a practice manager or IT officer, are aware of the
health care provider's obligation to engage in SPPC activities related
to requests to assist in ONC direct review of certified health IT and
the health care provider's option to engage in SPPC activities related
to requests to assist in ONC-ACB surveillance of certified health IT.
This includes understanding the requirement to cooperate in good faith
with a request to assist in ONC direct review if received. Health care
providers should also ensure that appropriate processes and policies
are in place for the practice to document all requests and
communications concerning SPPC activities as they would for other
requirements of CMS programs in which they participate. We note that
for a health care provider participating in a CMS program as an
individual, if that health care provider practices at multiple
locations or switches locations throughout the course of a year, they
would only need to make inquiries about any requests to assist in ONC
direct review of certified health IT during the period in which the
eligible clinician or EP worked at the practice.
We acknowledge the commenter's desire for a checklist tool to
provide greater certainty for clinicians. However, as ONC explained in
the 2015 Edition final rule, an evaluation of certified health IT in a
production environment may require a variety of methodologies and
approaches (80 FR 62709), and individual health care providers may have
different preferences and should have the flexibility to work with ONC
or an ONC-ACB to identify an effective approach that is most convenient
for them. Because the specific actions required will be
addressed on a case-by-case basis, the development of a checklist tool
may not be feasible. Rather, as noted previously, if any request is
made, ONC or an ONC-ACB will work directly with the health care
provider to provide clear guidance on the actions needed to assist in
the request. The health care provider would then retain any such
documentation concerning the request for their records as they would
for other similar requirements in CMS programs.
Comment: A commenter asked how ONC-ACBs will identify themselves
and how a health care provider will be able to verify that it is not
dealing with an imposter.
Response: Each health IT developer contracts with one or more ONC-
ACBs to provide certification services. As such, health IT developers
should be familiar with the processes used by their ONC-ACB(s) and have
existing practices for communicating with the personnel of their ONC-
ACB(s). A health care provider can, on receipt of a request to assist
an ONC-ACB, contact their health IT developer and request information
about the identity of the ONC-ACB personnel that will carry out the
activities. Health care providers should, before providing access to
their facility or the certified health IT, request that the ONC-ACB
personnel provide appropriate identification that matches the
information about the ONC-ACB provided by the provider's certified
health IT developer.
Comment: Several commenters requested that we elaborate on the
requirements for engaging in SPPC activities ``in good faith'' and for
permitting timely access to certified EHR technology.
Response: Health care providers are required to attest to engaging
in SPPC
[[Page 77027]]
activities, which requires that they cooperate in good faith and in a
timely manner with a request to assist in ONC direct review of
certified health IT if such a request is received. A health care
provider may also optionally attest to engaging in SPPC activities,
including having cooperated in good faith, in response to a request to
assist an ONC-ACB with surveillance of certified health IT. This
includes cooperating in a manner that aids and assists ONC or an ONC-
ACB to perform ONC direct review or ONC-ACB surveillance activities to
the extent that such cooperation is practicable and not unduly
burdensome to the provider. As previously mentioned, the particular
needs of any request for assistance from ONC or an ONC-ACB may vary
depending on a wide range of factors. In addition, ``in good faith'' is
necessarily dependent upon the particular facts and circumstances of
the health care provider who attests. For example, a request for
assistance may relate to a capability that the health care provider has
not enabled in their EHR because it is not needed for their particular
practice, and that might be costly, time consuming, or otherwise
unreasonable for the provider to enable solely for the purposes of ONC
direct review of that function. In such a case, a health care provider
who communicates these limitations to ONC, and maintains documentation
of the request and of these circumstances related to their practice,
may be found to have cooperated in good faith based on this
documentation. However, if the health care provider received such a
request and provided no response to the request and did not retain
documentation of these circumstances, they may be found not to have
cooperated in good faith.
Comment: One commenter asked us to clarify that a health care
provider will have satisfied the requirements of the proposed
attestation in the event that the health care provider was never
approached by ONC or an ONC-ACB with a request for assistance during
the relevant reporting period.
Response: In the circumstances the commenter describes, the health
care provider would be able to attest to both the mandatory attestation
(related to ONC direct review) and the optional attestation (related to
ONC-ACB surveillance) on the basis that they acknowledge the policy. In
other words, for the mandatory attestation, the health care provider
that receives no request related to ONC direct review could
successfully meet the attestation requirement by attesting that they
acknowledge the requirement to cooperate in good faith with all
requests for assistance with ONC direct review of their certified EHR
technology. Likewise, a health care provider that did not receive a
request for assistance with ONC-ACB surveillance during the reporting
year but still seeks to attest to the optional attestation would attest
that they are aware of the option to cooperate in good faith with all
requests for assistance in ONC-ACB surveillance. We have revised the
regulation text provisions at Sec. Sec. 495.4, 495.40(a)(2)(i)(H),
495.40(b)(2)(i)(H), and 414.1375(b)(3)(i) to state that a health care
provider engages in SPPC activities by cooperating in good faith with
the ONC-ACB surveillance or ONC direct review of its certified EHR
technology, to the extent that the health care provider receives a
request from an ONC-ACB or ONC during the relevant reporting period;
and that in the absence of any requests being made during the reporting
period, the health care provider would demonstrate their engagement in
the SPPC activities simply by attesting that they are aware of the SPPC
policy.
Comment: Several commenters requested clarification regarding the
documentation that would be required to demonstrate compliance with the
terms of the attestation so that health care providers could plan and
prepare for an audit of this requirement. Among other topics,
commenters requested guidance on expected documentation requirements
related to a health care provider's responsiveness to requests for
engagement in SPPC activities and the extent of cooperation required.
Response: We acknowledge commenters' concerns about required
documentation in cases of an audit. We clarify that we will provide
guidance to auditors relating to this final rule with comment period
and the attestation process in a similar manner as guidance is provided
for other requirements under current CMS programs. This instruction
includes requiring auditors to work closely with health care providers
on identifying the appropriate supporting documentation applicable to
the health care provider's individual case. We further stress that
audit determinations are made on a case by case basis, which allows us
to give individual consideration to each health care provider. We
believe that such case-by-case review will allow us to adequately
account for the varied circumstances that may be relevant.
Comment: Commenters requested clarification concerning the
effective date of the attestation requirement and, more specifically,
the period to which an attestation that a health care provider engaged
in SPPC activities would apply. Several commenters expressed concerns
related to the timing of the attestation, noting that health care
providers may submit attestations for reporting periods that have
already begun or that will have begun prior to the effective date of
this final rule with comment period.
Response: We understand the commenters' concerns and are finalizing
the requirement to attest to engagement in SPPC activities for health
care providers for MIPS performance periods or EHR reporting periods
beginning on or after January 1, 2017. The requirement includes only
requests to engage in SPPC activities received after the effective date
of this final rule with comment period. In other words, if a health
care provider receives a request from ONC or an ONC-ACB to engage in
SPPC activities before the effective date of this final rule with
comment period, the attestation requirement will not apply to that
request, and the health care provider is not required to cooperate with
the request.
After review and consideration of public comment, we are finalizing
revisions to the definition of a meaningful EHR user at Sec. Sec.
495.4 and 414.1305 to include ``engaging in activities related to
supporting providers with the performance of certified EHR
technology.''
We are finalizing modifications to the attestation requirements at
Sec. 495.40(a)(2)(i)(H) and (b)(2)(i)(H) to require an EP, eligible
hospital or CAH to attest that they engaged in SPPC activities by
attesting that they: (1) Acknowledge the requirement to cooperate in
good faith with ONC direct review of their health information
technology certified under the ONC Health IT Certification Program if a
request to assist in ONC direct review is received; and (2) if
requested, cooperated in good faith with ONC direct review of their
health information technology certified under the ONC Health IT
Certification Program, as authorized by 45 CFR part 170, subpart E, to
the extent that such technology meets (or can be used to meet) the
definition of CEHRT, including by permitting timely access to such
technology and demonstrating its capabilities as implemented and used
by the EP, eligible hospital, or CAH in the field.
Additionally, we are finalizing that, optionally, the EP, eligible
hospital, or CAH may also attest that they engaged in SPPC activities
by attesting that they: (1) Acknowledge the option to cooperate in good
faith with ONC-ACB surveillance of their health information technology
certified under the ONC Health IT Certification Program if a
[[Page 77028]]
request to assist in ONC-ACB surveillance is received; and (2) if
requested, cooperated in good faith with ONC-ACB surveillance of their
health information technology certified under the ONC Health IT
Certification Program, as authorized by 45 CFR part 170, subpart E, to
the extent that such technology meets (or can be used to meet) the
definition of CEHRT, including by permitting timely access to such
technology and demonstrating its capabilities as implemented and used
by the EP, eligible hospital, or CAH in the field.
We are also finalizing at Sec. 414.1375(b)(3) that the same
attestations be made by all eligible clinicians under the advancing
care information performance category of MIPS, including eligible
clinicians who report on the advancing care information performance
category as part of an APM Entity group under the APM scoring standard,
as discussed in section II.E.5.h. of this final rule with comment
period (see 81 FR 28170-71).
b. Support for Health Information Exchange and the Prevention of
Information Blocking
To prevent actions that block the exchange of information, section
106(b)(2)(A) of the MACRA amended section 1848(o)(2)(A)(ii) of the Act
to require that, to be a meaningful EHR user, an EP must demonstrate
that he or she has not knowingly and willfully taken action (such as to
disable functionality) to limit or restrict the compatibility or
interoperability of certified EHR technology. Section 106(b)(2)(B) of
MACRA made corresponding amendments to section 1886(n)(3)(A)(ii) of the
Act for eligible hospitals and, by extension, under section 1814(l)(3)
of the Act for CAHs. Sections 106(b)(2)(A) and (B) of the MACRA provide
that the manner of this demonstration is to be through a process
specified by the Secretary, such as the use of an attestation. Section
106(b)(2)(C) of the MACRA states that the demonstration requirements in
these amendments shall apply to meaningful EHR users as of the date
that is 1 year after the date of enactment, which would be April 16,
2016.
As legislative background, on December 16, 2014, in an explanatory
statement accompanying the Consolidated and Further Continuing
Appropriations Act,\5\ the Congress advised ONC to take steps to
``decertify products that proactively block the sharing of information
because those practices frustrate congressional intent, devalue
taxpayer investments in certified EHR technology, and make certified
EHR technology less valuable and more burdensome for eligible hospitals
and eligible providers to use.'' \6\ The Congress also requested a
detailed report on health information blocking (referred to in this
final rule with comment period as ``the Information Blocking Report'').
In the report, which was submitted to the Congress on April 10,
2015,\7\ ONC concluded from its experience and available evidence that
some persons and entities--including some health care providers--are
knowingly and unreasonably interfering with the exchange or use of
electronic health information in ways that limit its availability and
use to improve health and health care.\8\
---------------------------------------------------------------------------
\5\ Public Law 113-235.
\6\ 160 Cong. Rec. H9047, H9839 (daily ed. Dec. 11, 2014)
(explanatory statement submitted by Rep. Rogers, chairman of the
House Committee on Appropriations, regarding the Consolidated and
Further Continuing Appropriations Act, 2015).
\7\ ONC, Report to Congress on Health Information Blocking
(April 10, 2015), available at https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf.
\8\ Id. at 33.
---------------------------------------------------------------------------
We explained in the proposed rule that the demonstration required
by section 106(b)(2) of the MACRA must provide substantial assurance
not only that certified EHR technology was connected in accordance with
applicable standards during the relevant EHR reporting period, but that
the health care provider acted in good faith to implement and use the
certified EHR technology in a manner that supported and did not
interfere with the electronic exchange of health information among
health care providers and with patients to improve quality and promote
care coordination (81 FR 28172). We proposed that such a demonstration
be made through an attestation (referred to in this section of the
preamble as the ``information blocking attestation''), which would
comprise three statements related to health information exchange and
information blocking, which were described in the proposed rule (81 FR
28172). Accordingly, we proposed to revise the definition of a
meaningful EHR user at Sec. 495.4 and to revise the corresponding
attestation requirements at Sec. 495.40(a)(2)(i)(I) and (b)(2)(i)(I)
to require this attestation for all EPs, eligible hospitals, and CAHs
under the Medicare and Medicaid EHR Incentive Programs, beginning with
attestations submitted on or after April 16, 2016. Further, we proposed
this attestation requirement (at Sec. 414.1375(b)(3)(ii)) for all
eligible clinicians under the advancing care information performance
category of MIPS, including eligible clinicians who report on the
advancing care information performance category as part of an APM
Entity group under the APM scoring standard, as discussed in section
II.E.5.h of the proposed rule (81 FR 28181).
We invited public comment on this proposal, including whether the
proposed attestation statements could provide the Secretary with
adequate assurances that an eligible clinician, EP, eligible hospital,
or CAH has complied with the statutory requirements for information
exchange. We also encouraged public comment on whether there are
additional facts or circumstances to which eligible clinicians, EPs,
eligible hospitals, and CAHs should be required to attest, or whether
there is additional information that they should be required to report.
Comment: A number of commenters expressed strong support for this
proposal and urged us to finalize the information blocking attestation
as proposed. Commenters anticipated that such an attestation would
discourage information blocking; encourage more robust sharing of
information among all members of a patient's care team; increase demand
for more open and interoperable health IT platforms and systems; and
strengthen efforts to enhance health care quality and value, including
the capturing and sharing of information about quality, costs, and
outcomes. One commenter stated that the information blocking
attestation would also help independent physicians compete by deterring
predatory information sharing policies or practices, especially by
large health systems or hospitals.
Many commenters expressed partial support for this proposal but
voiced concerns about the particular content or form of the information
blocking attestation as proposed. Several commenters stated that the
language of the attestation was unclear and should provide more detail
regarding the specific actions to which health care providers would be
required to attest. Conversely, several commenters (including some of
the same commenters) believed that the language of the attestation was
too prescriptive. Some commenters recommended revising or removing one or
more of the three statements that comprise the attestation. A few
commenters suggested that we finalize only the first statement--which
mirrors the statutory language in section 106(b)(2) of the MACRA--and
contended that the other statements were unnecessary or, alternatively,
go beyond what section 106(b)(2) requires.
Some commenters were opposed in principle to requiring health care
[[Page 77029]]
providers to attest to any statement regarding information blocking.
Most of these commenters insisted that such a requirement would impose
unnecessary burdens or unfair obligations on health care providers,
who, in the view of the commenters, are seldom responsible for
information blocking.
The majority of commenters, whether they supported or opposed the
proposal, stressed that certain factors that prevent interoperability
and the ability to successfully exchange and use electronic health
information are beyond the ability of a health care provider to
control. Many of these commenters stated that EHR vendors should be
required to submit an information blocking attestation because they
have greater control over these factors and, in the experience of some
commenters, are more likely to engage in information blocking.
Response: After consideration of the comments as well as the
statutory provisions cited above, and in consultation with ONC, we
believe the proposed attestation requirement is an appropriate and
effective means to implement the demonstration required by section
106(b)(2) of the MACRA; we are therefore finalizing this requirement as
proposed, as discussed in greater detail below and in our responses to
specific comments that follow.
As many commenters recognized, the information blocking concerns
expressed by Congress are serious and reflect a systemic problem: A
growing body of evidence establishes that persons and entities--
including some health care providers--have strong incentives to
unreasonably interfere with the exchange and use of electronic health
information, undermining federal programs and investments in the
meaningful use of certified EHR technology to improve health and the
delivery of care.\9\ While effectively addressing this problem will
require additional and more comprehensive measures,\10\ section
106(b)(2) of the MACRA represents an important first step towards
increasing accountability for certain types of information blocking in
the specific context of meaningful EHR users.
---------------------------------------------------------------------------
\9\ See, for example, Julia Adler-Milstein and Eric Pfeifer,
Information Blocking: Is it occurring and what policy strategies can
address it?, Milbank Quarterly (forthcoming Mar 2017) (reporting
results of national survey of health information leaders in which 25
percent of respondents experienced routine information blocking by
hospitals and health systems and over 50 percent of respondents
experienced routine information blocking by EHR vendors); American
Society of Clinical Oncology, Barriers to interoperability and
information blocking (2015), http://www.asco.org/sites/www.asco.org/files/position_paper_for_clq_briefing_09142015.pdf (describing a
growing number of reports from members concerning information
blocking and stating that preventing these practices ``is critically
important to ensuring that every patient with cancer receives the
highest quality health care services and support''); David C.
Kendrick, Statement to the Senate, Committee on Health, Education,
Labor, and Pensions, Achieving the promise of health information
technology: information blocking and potential solutions, Hearing
(Jul 23, 2015), available at http://www.help.senate.gov/hearings/achieving-the-promise-of-health-information-technology-information-blocking-and-potential-solutions (describing information blocking as
``intentional interruption or prevention of interoperability'' by
providers or EHR vendors and stating ``we have so many specific
experiences with inappropriate data blocking . . . that we have
created a nomenclature [to classify the most common types].'');
David C. Kibbe, Statement to Senate, Committee on Health, Education,
Labor, and Pensions, Achieving the promise of health information
technology: information blocking and potential solutions, Hearing
(Jul 23, 2015), available at http://www.help.senate.gov/hearings/achieving-the-promise-of-health-information-technology-information-blocking-and-potential-solutions (testifying that despite progress
in interoperable health information exchange, ``information blocking
by health care provider organizations and their EHRs, whether
intentional or not, is still a problem''); H.R. 6, 114th Cong. Sec.
3001 (as passed by House of Representatives, July 10, 2015)
(prohibiting information blocking and providing enforcement
mechanisms, including civil monetary penalties and decertification
of products); see also H.R. Rep. No. 114-190, pt. 1, at 126 (2015)
(reporting that provisions of H.R. 6 ``would refocus national
efforts on making systems interoperable and holding individuals
responsible for blocking or otherwise inhibiting the flow of patient
information throughout our healthcare system.''); Connecticut Public
Act No. 15-146 (enacted June 30, 2015) (making information blocking
an unfair trade practice, authorizing state attorney general to
bring civil enforcement actions for penalties and punitive damages);
ONC, Report to Congress on Health Information Blocking (April 10,
2015), available at https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf (``[B]ased on the evidence and
knowledge available, it is apparent that some health care providers
and health IT developers are knowingly interfering with the exchange
or use of electronic health information in ways that limit its
availability and use to improve health and health care. This conduct
may be economically rational for some actors in light of current
market realities, but it presents a serious obstacle to achieving
the goals of the HITECH Act and of health care reform.'')
\10\ See ONC, FY 2017: Justification of Estimates for
Appropriations Committee, https://www.healthit.gov/sites/default/files/final_onc_cj_fy_2017_clean.pdf (2016), Appendix I (explaining
that current law does not directly prohibit or provide an effective
means to investigate and address information blocking by EHR
vendors, health care providers, and other persons and entities, and
proposing that Congress prohibit and prescribe appropriate penalties
for these practices, including civil monetary penalties and program
exclusion).
---------------------------------------------------------------------------
The proposed information blocking attestation consists of three
statements that contain several specific representations about a health
care provider's implementation and use of certified EHR technology.
These representations, taken together, will enable the Secretary to
infer with reasonable confidence that the attesting health care
provider acted in good faith to support the appropriate exchange of
electronic health information and therefore did not knowingly and
willfully limit or restrict the compatibility or interoperability of
certified EHR technology.
We believe that this level of specificity is necessary and that a
more generalized attestation would not provide the necessary assurances
described above. This does not mean, however, that the information
blocking attestation imposes unnecessary or unreasonable requirements
on health care providers. To the contrary, we have carefully tailored
the attestation to the demonstration required by section 106(b)(2) of
the MACRA. In particular, the attestation focuses on whether a health
care provider acted in good faith to implement and use certified EHR
technology in a manner that supports interoperability and the
appropriate exchange of electronic health information. Recognizing that
a variety of factors may prevent the exchange or use of electronic
health information, and consistent with the focus of section 106(b)(2)
on actions that are knowing and willful, this good faith standard takes
into account health care providers' individual circumstances and does
not hold them accountable for consequences they cannot reasonably
influence or control.
For these and the additional reasons set forth in our responses to
comments immediately below, and subject to the clarifications therein,
we are finalizing this attestation requirement as proposed.
Comment: A number of commenters, several of whom expressed support
for our proposal, regarded the language of the attestation as quite
broad and stated that additional guidance may be needed to enable
health care providers to understand the actions to which they would be
required to attest.
Response: We agree that health care providers must be able to
understand and comply with program requirements. For this reason, the
information blocking attestation consists of three statements related
to health information exchange and the prevention of health information
blocking. These statements--which we are finalizing at Sec.
495.40(a)(2)(i)(I) for EPs, Sec. 495.40(b)(2)(i)(I) for eligible
hospitals and CAHs, and Sec. 414.1375(b)(3)(ii) for eligible
clinicians--contain specific representations about a health care
provider's implementation and use of certified EHR technology. We
believe that these statements, taken together, communicate with
appropriate specificity the actions health care providers must attest
to in order to
[[Page 77030]]
demonstrate that they have complied with the requirements established
by section 106(b)(2) of the MACRA. To provide further clarity, we set
forth and explain each of these statements in turn below.
Statement 1: A health care provider must attest that it
did not knowingly and willfully take action (such as to disable
functionality) to limit or restrict the compatibility or
interoperability of certified EHR technology.
This statement mirrors the language of section 106(b)(2) of the
MACRA. We note that except for one illustrative example (concerning
actions to disable functionality), the above statement does not contain
specific guidance as to the types of actions that are likely to ``limit
or restrict'' the compatibility or interoperability of certified EHR
technology, nor the circumstances in which a health care provider who
engages in such actions does so ``knowingly and willfully.'' The
information blocking attestation supplements the foregoing statement
with two more detailed statements concerning the specific actions a
health care provider took to support interoperability and the exchange
of electronic health information.
Statement 2: A health care provider must attest that it
implemented technologies, standards, policies, practices, and
agreements reasonably calculated to ensure, to the greatest extent
practicable and permitted by law, that the certified EHR technology
was, at all relevant times: (1) Connected in accordance with applicable
law; (2) compliant with all standards applicable to the exchange of
information, including the standards, implementation specifications,
and certification criteria adopted at 45 CFR part 170; (3) implemented
in a manner that allowed for timely access by patients to their
electronic health information (including the ability to view, download,
and transmit this information); and (4) implemented in a manner that
allowed for the timely, secure, and trusted bi-directional exchange of
structured electronic health information with other health care
providers (as defined by 42 U.S.C. 300jj(3)), including unaffiliated
health care providers, and with disparate certified EHR technology and
vendors.
This statement focuses on the manner in which a health care
provider implemented its certified EHR technology during the relevant
reporting period, which is directly relevant to whether the health care
provider took any actions to limit or restrict the compatibility or
interoperability of the certified EHR technology. By attesting to this
statement, a health care provider represents that it acted in good
faith to implement its certified EHR technology in a manner that
supported--and did not limit or restrict--access to and the exchange of
electronic health information, to the extent that such access or
exchange was appropriate (that is, practicable under the circumstances
and authorized, permitted, or required by law). More specifically, the
health care provider represents that it took reasonable steps
(including working with its health IT developer and others as
necessary) to verify that its certified EHR technology was connected
(that is, implemented and configured) in accordance with applicable
standards and law.
In addition to verifying that certified EHR technology was
connected and accessible during the relevant reporting period, a health
care provider must represent that it took reasonable steps to implement
corresponding technologies, standards, policies, practices, and
agreements to enable the use of certified EHR technology, including by
patients and by other health care providers, and not to limit or
restrict appropriate access to or use of information in the health care
provider's certified EHR technology. For example, actions to limit or
restrict compatibility or interoperability could include implementing
or configuring certified EHR technology so as to limit access to
certain types of data elements or to the ``structure'' of the data, or
implementing certified EHR technology in ways that limit the types of
persons or entities that may be able to access and exchange
information, or the types of technologies through which they may do so.
Statement 3: A health care provider must attest that it
responded in good faith and in a timely manner to requests to retrieve
or exchange electronic health information, including from patients,
health care providers (as defined by 42 U.S.C. 300jj(3)), and other
persons, regardless of the requestor's affiliation or technology
vendor.
This third and final statement builds on a health care provider's
representations concerning the manner in which its certified EHR
technology was implemented by focusing on how the health care provider
actually used the technology during the relevant reporting period. By
attesting to this statement, a health care provider represents that it
acted in good faith to use the certified EHR technology to support the
appropriate exchange and use of electronic health information. This
includes, for example, taking reasonable steps to respond to requests
to access or exchange information, provided that such access or
exchange is appropriate, and not unreasonably discriminating on the
basis of the requestor's affiliation, technology vendor, or other
characteristics, as described in the statement.
We provide further discussion and analysis of the foregoing
statements and their application in our responses to the specific
comments summarized in the remainder of this section. We believe that
these statements, taken together, provide a clear and appropriately
detailed description of a health care provider's obligations under
section 106(b)(2) of the MACRA, will enable health care providers to
demonstrate
compliance to the satisfaction of the Secretary, and will promote fair
and consistent application of program requirements across all attesting
health care providers.
Comment: Several commenters asked us to identify the specific
actions and circumstances that would support a finding that a health
care provider has knowingly and willfully limited or restricted the
compatibility or interoperability of certified EHR technology. Some
commenters inquired whether this determination would turn on a health
care provider's individual circumstances or other case-by-case
considerations, such as a health care provider's practice size,
setting, specialty, and level of technology adoption. Commenters also
asked whether other circumstances could justify limitations or
restrictions on the compatibility or interoperability of certified EHR
technology. For example, a commenter asked whether an office-based
clinic that periodically turns its computer network off overnight to
perform system maintenance would be deemed to have limited the
interoperability of its certified EHR technology on the basis that
other health care providers might be unable to request and retrieve
records during that time. Commenters gave other potential
justifications for blocking access to or the exchange of information,
such as privacy or security concerns or the need to temporarily block
the disclosure of sensitive test results to allow clinicians who order
tests an opportunity to discuss the results with their patients prior
to sharing the results with other health care providers.
One commenter suggested that we approach this question in the
manner described in the Information Blocking Report, which focuses on
whether actions that interfere with the exchange or use of electronic
health information have any objectively reasonable justification.
Response: The compatibility or interoperability of certified EHR
[[Page 77031]]
technology may be limited or restricted in ways that are too numerous
and varied to catalog. While section 106(b)(2) of the MACRA
specifically mentions actions to disable the functionality of certified
EHR technology, other actions that are likely to interfere with the
exchange or use of electronic health information could limit or
restrict compatibility or interoperability. For example, the
Information Blocking Report describes certain categories of business,
technical, and organizational practices that are inherently likely to
interfere with the exchange or use of electronic health
information.\11\ These practices include but are not limited to:
---------------------------------------------------------------------------
\11\ ONC, Report to Congress on Health Information Blocking
(April 10, 2015) at 13, available at https://www.healthit.gov/sites/default/files/reports/info_blocking_040915.pdf.
---------------------------------------------------------------------------
Contract terms, policies, or other business or
organizational practices that restrict individuals' access to their
electronic health information or restrict the exchange or use of that
information for treatment and other permitted purposes.
Charging prices or fees that make exchanging and using
electronic health information cost prohibitive.
Implementing certified EHR technology in non-standard ways
that are likely to substantially increase the costs, complexity, or
burden of sharing electronic health information (especially when
relevant interoperability standards have been adopted by the
Secretary).
Implementing certified EHR technology in ways that are
likely to ``lock in'' users or electronic health information (including
using certified EHR technology to inappropriately limit or steer
referrals).
Such actions would be contrary to section 106(b)(2) only when
engaged in ``knowingly and willfully.'' We believe the purpose of this
requirement is to ensure that health care providers are not penalized
for actions that are inadvertent or beyond their control.
To illustrate these concepts, we consider several hypothetical
scenarios raised by the commenters. First, we consider the situation
suggested by one commenter in which a health care provider disables its
computer network overnight to perform system maintenance. In this
situation, the health care provider knows that the natural and probable
consequence of its actions will be to prevent access to information in
the certified EHR technology and in this way limit and restrict the
interoperability of the technology. However, we recognize that health
IT requires maintenance to ensure that capabilities function properly,
including in accordance with applicable standards and law. We also
appreciate that in many cases it may not be practicable to implement
redundant capabilities and systems for all functionality within
certified EHR technology, especially for physician practices and other
health care providers with comparatively limited health IT resources
and expertise. Assuming that a health care provider acts in good faith
to
disable functionality for the purpose of performing system maintenance,
it is unlikely that the health care provider would knowingly and
willfully limit or restrict the compatibility or interoperability of
the certified EHR technology. We note that our assumption that the
health care provider acted in good faith presupposes that it did not
disable functionality except to the extent and for the duration
necessary to ensure the proper maintenance of its certified EHR
technology, and that it took reasonable steps to minimize the impact of
such maintenance on the ability of patients and other health care
providers to appropriately access and exchange information, such as by
scheduling maintenance overnight and responding to any requests for
access or exchange once the maintenance has been completed and it is
otherwise practicable to do so.
Next, we consider the situation in which a health care provider
blocks access to information in its certified EHR technology due to
concerns related to the security of the information. Depending on the
circumstances, certain access restrictions may be reasonable and
necessary to protect the security of information maintained in
certified EHR technology. In contrast, restrictions that are
unnecessary or unreasonably broad could constitute a knowing and
willful restriction of the compatibility or interoperability of the
certified EHR technology. Because of the complexity of these issues,
determining whether a health care provider's actions were reasonable
would require additional information about the health care provider's
actions and the circumstances in which they took place.
As a final example, we consider whether it would be permissible for
a health care provider to restrict access to a patient's sensitive test
results until the clinician who ordered the tests, or another
designated health care professional, has had an opportunity to review
and appropriately communicate the results to the patient. We assume for
purposes of this example that, consistent with the HIPAA Privacy Rule,
the restriction does not apply to the patient herself or to the
patient's written request to send this information to any other
person the patient designates. With that assumption and under the
circumstances we have described, it is likely that the health care
provider is knowingly restricting interoperability. We believe that the
restriction may be reasonable so long as the health care provider
reasonably believes, based on its relationship with the particular
patient and its best clinical judgment, that the restriction is
necessary to protect the health or wellbeing of the patient. We note
that our analysis would be different if the restriction were not based
on a health care provider's individualized assessment of the patient's
best interests and instead reflected a blanket policy to block access
to test results until released by the ordering physician. Similarly,
while clinical judgment and the health care provider-patient
relationship are entitled to substantial deference, they may not be
used as a pretext for limiting or restricting the compatibility or
interoperability of certified EHR technology.
The examples provided in this section of the final rule with
comment period are intended to be illustrative. We reiterate the need
to consider the unique facts and circumstances in each case in order to
determine whether a health care provider knowingly and willfully
limited or restricted the compatibility or interoperability of
certified EHR technology.
    Comment: One commenter asked whether the requirement that certified
EHR technology comply with federal standards precludes the use of
other standards for the exchange of electronic health information.
Response: In general, while certified EHR technology must be
connected in accordance with applicable federal standards, this
requirement does not preclude the use of other standards or
capabilities, provided the use of such standards or capabilities does
not limit or restrict the compatibility or interoperability of the
certified EHR technology.
Comment: Several commenters requested that we clarify our
expectations for timeliness of access to or exchange of information.
Response: As we have explained, whether a health care provider has
knowingly and willfully limited or restricted the interoperability of
certified EHR technology will depend on the relevant facts and
circumstances. While for this reason we decline to
[[Page 77032]]
adopt any bright-line rules, we reiterate that a health care provider
must attest that it responded in good faith and in a timely manner to
requests to retrieve or exchange electronic health information. What
will be ``timely'' will of course vary based on relevant factors such
as a health care provider's level of technology adoption and the types
of information requested. For requests from patients, we note that
while the HIPAA Privacy Rule provides that a covered entity may take up
to 30 days to respond to a patient's written request for access to his
or her PHI maintained by the covered entity, it is expected that the
use of technology will enable the covered entity to fulfill the
individual's request in far fewer than 30 days.\12\ Where information
requested or directed by a patient can be readily provided using the
capabilities of certified EHR technology, access should in most cases
be immediate and in all cases as expeditious as is practicable under
the circumstances.
---------------------------------------------------------------------------
\12\ HHS Office for Civil Rights, Individuals' Right under HIPAA
to Access their Health Information 45 CFR 164.524, http://www.hhs.gov/hipaa/for-professionals/privacy/guidance/access/index.html (last accessed Sept. 6, 2016).
---------------------------------------------------------------------------
Comment: Many commenters stated that health care professionals and
organizations should not be held responsible for adherence to health IT
certification standards or other technical details of health IT
implementation that are beyond their expertise or control. According to
these commenters, requiring health care providers to attest to these
technical implementation details would unfairly place them at financial
risk for factors that are beyond the scope of their medical training.
Additionally, many commenters took the position that EHR vendors are in
the best position to ensure that certified EHR technology is connected
in accordance with applicable law and compliant with applicable
standards, implementation specifications, and certification criteria.
Response: We reiterate that a health care provider will not be held
accountable for factors that it cannot reasonably influence or control,
including the actions of EHR vendors. Nor do we expect health care
providers themselves to have any special technical expertise or to
personally tend to the technical details of their health IT
implementations. We do expect, however, that a health care provider
will take reasonable steps to verify that the certified EHR technology
is connected (that is, implemented and configured) in accordance with
applicable standards and law and in a manner that will allow the health
care provider to attest to having satisfied the conditions described in
the information blocking attestation. In this respect, a health care
provider's obligations include communicating these requirements to
health IT developers, implementers, and other persons who are
responsible for implementing and configuring the health care provider's
certified EHR technology. In addition, the health care provider should
obtain adequate assurances from these persons to satisfy itself that
its certified EHR technology was connected in accordance with
applicable standards and law and in a manner that will enable the
health care provider to demonstrate that it has not knowingly and
willfully taken action to limit or restrict the compatibility or
interoperability of certified EHR technology.
Comment: Several commenters supported the attestation's emphasis on
the bi-directional exchange of structured electronic health
information. Multiple commenters suggested that this requirement would
expand access to relevant information by members of a patient's care
team, allowing them to deliver more effective and comprehensive care,
enhance health outcomes, and contribute directly to the goals of
quality and affordability. As an example, commenters stated that the
bi-directional exchange of information among pharmacists and other
clinicians can provide important information for comprehensive
medication management.
Other commenters opposed or raised concerns regarding this aspect
of our proposal, stating that bi-directional information exchange may
not be feasible for many health care providers or may raise a variety
of technical and operational challenges and potential privacy or
security concerns.
Some commenters requested that CMS clarify the term ``bi-
directional exchange'' and the actions a health care provider would be
expected to take to satisfy this aspect of the attestation. One
commenter inquired specifically whether bi-directional exchange could
include using a health information exchange or other intermediary to
connect disparate certified EHR technology so that users could both
send and receive information in an interoperable manner. If so, the
commenter asked whether a health care provider would be expected to
participate in multiple arrangements of this kind (and, if so, how
many). Multiple commenters stated that it is not appropriate to allow
bi-directional exchange in all circumstances and that privacy,
security, safety, and other considerations require health care
providers to restrict the types of information that the certified EHR
technology will accept and the persons or other sources of that
information.
    Response: We appreciate that bi-directional exchange of information
presents challenges, including the need to validate the authenticity,
accuracy, and integrity of data received from outside sources, to
mitigate potential privacy and security risks, and to overcome
technical, workflow, and other related challenges. We also acknowledge
that accomplishing bi-directional exchange may be challenging for
certain health care providers or for certain types of information or
use cases. However, a significant number of health care providers are
already exchanging some types of electronic health information in a bi-
directional manner. Based upon data collected in 2014, approximately
one-fifth of non-federal acute care hospitals electronically sent,
received, found (queried), and were able to easily integrate summary of
care records into their EHRs.\13\ We also note that meaningful EHR
users are required to use certified EHR technology that has the
capacity to ``exchange electronic health information with, and to
integrate such information from other sources,'' as required by the
2014 and 2015 Edition Base EHR definitions at 45 CFR 170.102 and
corresponding certification criteria, such as the transitions of care
criteria (45 CFR 170.314(b)(1) and (2) (2014 Edition) and 45 CFR
170.315(b)(2) (2015 Edition)).
---------------------------------------------------------------------------
    \13\ Charles D, Swain M, Patel V. (August 2015). Interoperability
among U.S. Non-federal Acute Care Hospitals. ONC Data Brief, No. 25.
ONC: Washington DC. https://www.healthit.gov/sites/default/files/briefs/onc_databrief25_interoperabilityv16final_081115.pdf. Similar
data for office-based physicians will be available in 2016. ONC,
Request for Information Regarding Assessing Interoperability for
MACRA, 81 FR 20651 (April 8, 2016).
---------------------------------------------------------------------------
We expect these trends to increase as standards and technologies
improve and as health care providers, especially those participating in
Advanced APMs, seek to obtain more complete and accurate information
about their patients with which to coordinate care, manage population
health, and engage in other efforts to improve quality and value.
We clarify that bi-directional exchange may include using certified
EHR technology with a health information exchange or other intermediary
to connect disparate certified EHR technology so that users could both
send and receive information in an interoperable manner. Whether a
health care provider could participate in
[[Page 77033]]
arrangements of this kind, or multiple arrangements, would depend on
its particular circumstances, including its technological capabilities
and sophistication, its financial resources, its role within the local
health care community, and the availability of state or regional health
information exchange infrastructure, among other relevant factors. A
health care provider is not obligated to participate in every
information sharing arrangement or to accommodate every request to
connect via a custom interface. On the other hand, a health care
provider with substantial resources that refuses to participate in any
health information exchange efforts might invite scrutiny if, combined
with other relevant facts and circumstances, there were reason to
suspect that the health care provider's refusal to participate in
certain health information exchange efforts was part of a larger
pattern of behavior or a course of conduct to knowingly and willfully
limit the compatibility or interoperability of the certified EHR
technology.
Comment: Several commenters were concerned about the requirement to
respond to requests to retrieve or exchange electronic health
information. Commenters stated that health care providers may have
difficulty responding to requests from unaffiliated health care
providers or from EHR vendors with whom they do not have a business
associate agreement.
A few commenters were concerned that health care providers may be
penalized for limiting or restricting access to information despite not
knowing whether an unaffiliated health care provider or EHR vendor is
authorized or permitted to access a patient's protected health
information (PHI). Another commenter
noted that some state laws require written patient consent before
certain types of health information may be exchanged electronically.
Some commenters contested the technical feasibility of exchanging
information with unaffiliated health care providers and across
disparate certified EHR technologies, explaining that federally-adopted
standards such as the Direct standard do not support such robust
information sharing. In particular, there is no widely-accepted and
standardized method to encode requests in Direct messages, which means
that a receiving system will often be unable to understand what
information is being requested.
Response: The ability to exchange and use information across
multiple systems and health care organizations is integral to the
concept of interoperability and, consequently, to a health care
provider's demonstration under section 106(b)(2) of the MACRA.
Consistent with its attestation, a health care provider must implement
technologies, standards, policies, practices, and agreements reasonably
calculated to ensure, to the greatest extent practicable and permitted
by law, that the certified EHR technology was, at all relevant times,
implemented in a manner that allowed for timely access by patients to
their electronic health information (including the ability to view,
download, and transmit this information) and implemented in a manner
that allowed for the timely, secure, and trusted bi-directional
exchange of structured electronic health information with other health
care providers, including unaffiliated providers, and with disparate
certified EHR technology and vendors.
We recognize that technical, legal, and other practical constraints
may prevent a health care provider from responding to some requests to
access, exchange, or use electronic health information in a health care
provider's certified EHR technology, even when the requester has
permission or the right to access and use the information. We reiterate
that in these circumstances a health care provider probably would not
have knowingly and willfully limited or restricted the compatibility or
interoperability of the certified EHR technology. We expect that these
technical and other challenges will become less significant over time
and that health care providers will be able to respond to requests from
an increasing range of health care providers and health IT systems.
In response to the concerns regarding the disclosure of PHI without
a business associate agreement, we remind commenters that the HIPAA
Privacy Rule expressly permits covered entities to disclose PHI for
treatment, payment, and health care operations. We refer commenters to numerous
guidance documents and fact sheets issued by the HHS Office for Civil
Rights and ONC on this subject.\14\ We also caution that
mischaracterizing or misapplying the HIPAA Privacy Rule or other legal
requirements in ways that are likely to limit or restrict the
compatibility or interoperability of certified EHR technology might be
inconsistent with the requirements of section 106(b)(2) of the MACRA
and a health care provider's information blocking attestation. As an
example, a health system that maintains a policy or practice of
refusing to share PHI with unaffiliated health care providers on the
basis of generalized and unarticulated ``HIPAA compliance concerns''
could be acting contrary to section 106(b)(2) and the information
blocking attestation. The same would be true were a health care
provider to inform a patient that it is unable to share information
electronically with the patient's other health care professionals ``due
to HIPAA.''
---------------------------------------------------------------------------
\14\ See, e.g., HHS Office for Civil Rights, Understanding Some
of HIPAA's Permitted Uses and Disclosures, http://www.hhs.gov/hipaa/for-professionals/privacy/guidance/permitted-uses/index.html (last
accessed Sept. 1, 2016); see also Lucia Savage and Aja Brooks, The
Real HIPAA Supports Interoperability, Health IT Buzz Blog, https://www.healthit.gov/buzz-blog/electronic-health-and-medical-records/interoperability-electronic-health-and-medical-records/the-real-hipaa-supports-interoperability/ (last accessed Sept. 1, 2016).
---------------------------------------------------------------------------
Comment: A small number of commenters, primarily health IT
developers, recommended that any requirements to exchange information
be limited to the use of certified health IT capabilities required by
the 2015 Edition health IT certification criteria or 2014 Edition EHR
certification criteria (45 CFR 170.102), as applicable. In contrast, a
commenter stated that a significant amount of health information is
exchanged through means other than the standards and capabilities
supported by ONC's certification criteria for health IT. The commenter
cited as an example the widespread use of health information exchanges
(HIEs) and network-to-network exchanges, which may or may not
incorporate the use of certified health IT capabilities. The commenter
insisted that these approaches should not be regarded as information
blocking and should be treated as evidence that a health care provider
is supporting and participating in efforts to exchange electronic
health information. Another commenter stated that the requirement to
respond to requests to retrieve or exchange electronic health
information should be satisfied by connecting certified EHR technology
to a network that can be accessed by other health care providers.
Response: We decline to limit the attestation to the use of
certified health IT capabilities or to give special weight to any
particular form or method of exchange. As observed by the commenters,
certified EHR technology may be implemented and used in many different
ways that support the exchange and use of electronic health
information. A health care provider's use of these forms and methods of
exchange may be relevant to determining whether it acted in good faith
to implement and use its certified EHR technology in a manner that
supported and did not limit or restrict the compatibility or
interoperability of the technology. As an example, certified
EHR technology may come bundled with a health information service
provider (HISP) that limits the ability to send and receive Direct
messages to certain health care providers, such as those whose EHR
vendor participates in a particular trust network. To overcome this or
other technical limitations, a health care provider may participate in
a variety of other health information sharing arrangements, whether to
expand the reach of its Direct messaging capabilities or to enable
other methods of exchanging and using electronic health information in
its certified EHR technology. We believe that these and similar actions
may be relevant to and should not be excluded from the consideration of
the health care provider's overall actions to enable the
interoperability of its certified EHR technology and to respond in good
faith to requests to access or exchange electronic health information.
Comment: Some commenters recommended that we revise the language of
the attestation in whole or in part. Most of these commenters suggested
removing certain language or statements, or combining them, to make the
requirements of the attestation easier to understand or comply with.
One commenter suggested that we abandon the proposed language and adopt
the commenter's alternative language, which would require health care
providers to attest that they established a workflow for responding to
requests to retrieve or exchange electronic health information and did
not knowingly or willfully limit or restrict the compatibility or
interoperability of certified EHR technology during the development or
implementation of the workflow, or in any subsequent actions related to
the workflow.
Response: We appreciate commenters' suggestions, but for the
reasons we have explained, we do not believe it is appropriate to
remove or to further simplify the language of the attestation. Although
we do not adopt the alternative language suggested by one commenter, we
observe that the actions the commenter describes are consistent with
our expectation that health care providers implement certified EHR
technology in a manner reasonably calculated to facilitate
interoperability, to the greatest extent practicable, and respond in
good faith to requests to retrieve or exchange information.
Comment: Several commenters claimed that the proposed attestation
is not necessary because most health care providers are not knowingly
or willfully engaging in actions to limit or restrict the
interoperability or compatibility of certified EHR technology, or to
otherwise interfere with the exchange or use of electronic health
information. Some of these commenters, while acknowledging that some
health care providers may be engaging in actions that could limit or
restrict the interoperability or compatibility of certified EHR
technology, maintained that such actions are justified or are beyond a
health care provider's control. Some commenters supported an
attestation for hospitals or health systems but not for physicians, on
the basis that the majority of individual EHR users are not engaging in
information blocking.
Response: The belief that health care providers do not engage in
information blocking is contradicted by an increasing body of evidence
and research, by the experience of CMS and ONC, and by many of the
comments on this proposal.\15\ It is also inconsistent with section
106(b)(2) of the MACRA, which is entitled ``Preventing Blocking The
Sharing Of Information'' and expressly requires health care providers
to demonstrate that they did not knowingly and willfully take action to
limit or restrict the interoperability of certified EHR technology.
---------------------------------------------------------------------------
\15\ See, for example, the studies by Julia Adler-Milstein, Eric
Pfeifer, and others referenced in this final rule with comment period.
---------------------------------------------------------------------------
We need not contemplate whether health systems or any other class
of health care provider is more predisposed to engage in information
blocking, because the attestation we are finalizing implements section
106(b)(2) of the MACRA, which extends to all MIPS eligible clinicians,
eligible clinicians part of an APM Entity, EPs, eligible hospitals, and
CAHs.
Comment: Some commenters suggested that, in lieu of an attestation,
CMS allow health care providers to demonstrate compliance with
section 106(b)(2) by reporting on objectives and measures under the
Medicare and Medicaid EHR Incentive Programs or the advancing care
information performance category of MIPS. Commenters noted that health
care providers participating in these programs must utilize CEHRT,
including application programming interfaces (APIs) that provide access
to patient data, and that participation in these programs should itself
provide an adequate assurance that health care providers are not
knowingly and willfully limiting or restricting the compatibility or
interoperability of certified EHR technology.
Response: We do not believe that a health care provider's reporting
of objectives and measures can provide the demonstration required by
section 106(b)(2) of the MACRA. The compatibility or interoperability
of certified EHR technology may be limited or restricted in numerous
and varied ways that are difficult to anticipate and that may not be
reflected in objectives and measures under the EHR Incentive Programs
and MIPS, which address only certain aspects related to the use of
certified health IT. It is therefore entirely possible that a health
care provider could implement and use certified EHR technology and meet
relevant objectives and measures while still engaging in many actions
that limit or restrict compatibility or interoperability. While in
theory we could specify additional objectives and measures specifically
related to the prevention of health information blocking, at this time
we believe a less burdensome and more effective way to obtain adequate
assurances that health care providers have not engaged in these
prohibited practices is through the information blocking attestation we
proposed and are finalizing.
Comment: Many commenters stated that EHR vendors, not health care
providers, are the primary cause of existing barriers to
interoperability and information exchange. Many of these commenters
stated that EHR vendors are engaging in information blocking, with some
commenters alleging that EHR vendors are routinely engaging in these
practices. Commenters alleged that EHR vendors are unwilling to share
data in certain circumstances or charge fees that make such sharing
cost-prohibitive for most physicians, which poses a significant barrier
to interoperability and the efficient exchange of electronic health
information.
For these reasons, many commenters suggested that CMS or ONC
require EHR vendors and other health IT developers to make an
information blocking attestation, or impose other requirements and
penalties on developers to deter them from limiting or restricting the
interoperability of certified EHR technology and to encourage them to
proactively facilitate the sharing of electronic health information.
For example, commenters supported the decertification of EHR vendors
that charge excessive fees or engage in other practices that may
constitute information blocking.
Response: We agree that eligible clinicians, EPs, eligible
hospitals, and CAHs are by no means the only persons or entities that
may engage in information blocking. However, requirements for EHR
vendors or other health IT developers are beyond the
scope of section 106(b)(2) of the MACRA and this rulemaking.
We note that a series of legislative proposals included in the
President's Fiscal Year 2017 Budget would prohibit information blocking
by health IT developers and others and provide civil monetary
penalties and other remedies to deter this behavior.\16\ In addition,
ONC has taken a number of immediate actions to expose and discourage
information blocking by health IT developers, including requiring
developers to disclose material information about limitations and types
of costs associated with their certified health IT (see 45 CFR
170.523(k)(1); see also 80 FR 62719) and requiring ONC-ACBs to conduct
more extensive and more stringent surveillance of certified health IT,
including surveillance of certified health IT ``in the field'' (see 45
CFR 170.556; see also 80 FR 62707). ONC has also published resources,
including a new guide to EHR contracts that can assist health care
providers to compare EHR vendors and products and negotiate appropriate
contract terms that do not block access to data or otherwise impair the
use of certified EHR technology.\17\
---------------------------------------------------------------------------
\16\ See ONC, FY 2017: Justification of Estimates for
Appropriations Committee, https://www.healthit.gov/sites/default/files/final_onc_cj_fy_2017_clean.pdf (2016), Appendix I (explaining
that current law does not directly prohibit or provide an effective
means to investigate and address information blocking by EHR
vendors, health care providers, and other persons and entities, and
proposing that Congress prohibit and prescribe appropriate penalties
for these practices, including civil monetary penalties and program
exclusion).
\17\ ONC, EHR Contracts Untangled: Selecting Wisely, Negotiating
Terms, and Understanding the Fine Print (Sept. 2016), available at
https://www.healthit.gov/sites/default/files/EHR_Contracts_Untangled.pdf.
---------------------------------------------------------------------------
Comment: Several commenters requested clarification regarding the
documentation that would be required to demonstrate compliance with the
terms of the attestation so that health care providers could both
better understand and prepare for an audit of this requirement. Among
other topics, commenters requested guidance on expected documentation
requirements related to particular technologies or capabilities as well
as a health care provider's responsiveness to requests to exchange
information.
Response: We acknowledge commenters' concerns about the
documentation that would be required in the event of an audit. To
alleviate those concerns, we clarify that we will provide guidance to
auditors relating to the final policy and the attestation process. This
guidance will include requiring auditors to work closely with health
care providers on the supporting documentation applicable to the health
care provider's individual case. We further stress that audit
determinations are made on a case-by-case basis, which allows us to
give individual consideration to each health care provider. We believe
that such case-by-case review will allow us to adequately account for
the varied circumstances that may be relevant to assessing compliance.
Comment: Some commenters stated that it would be inappropriate for
ONC or an ONC-ACB to perform surveillance of a health care provider's
certified EHR technology to determine whether the health care provider
is limiting or restricting interoperability.
Response: The scope of ONC-ACB surveillance or, if finalized, ONC's
review of a health care provider's certified EHR technology is limited
to determining whether the technology continues to perform in
accordance with the requirements of the ONC Health IT Certification
Program. Because this oversight focuses on the performance of the
technology itself, not on the actions of health care providers or users
of the technology, we do not anticipate that information obtained in
the course of such ONC-ACB surveillance or ONC review would be used to
audit a health care provider's compliance with its information blocking
attestation. As a caveat, we acknowledge that if ONC became aware that
a health care provider had submitted a false attestation or engaged in
other actions in violation of federal law or requirements, ONC could
share that information with relevant federal entities.
Comment: Some commenters asked how often attestations would be
required (for example, once per year). Commenters also stated that the
information blocking attestation should apply prospectively, possibly
beginning with reporting periods commencing in 2017, to provide
reasonable notice to affected parties.
Response: MIPS eligible clinicians, eligible clinicians part of an
APM Entity, EPs, eligible hospitals, and CAHs must submit an
information blocking attestation covering each reporting period during
which they seek to demonstrate that they were a meaningful EHR user or
for which they seek to report on the advancing care information
performance category. We agree that the attestation requirements should
apply only to actions occurring after the effective date of this final
rule with comment period. For this reason and to promote alignment with
other reporting requirements, we are finalizing the information
blocking attestation for attestations covering EHR reporting periods
and MIPS performance periods beginning on or after January 1, 2017.
After review and consideration of public comment, we are finalizing
the attestation requirement as proposed. We are finalizing this
requirement for EPs, eligible hospitals, and CAHs under the Medicare
and Medicaid EHR Incentive Programs and for eligible clinicians under
the advancing care information performance category in MIPS, including
eligible clinicians who report on the advancing care information
performance category as part of an APM Entity group under the APM
scoring standard. We are finalizing this requirement for attestations
covering EHR reporting periods and MIPS performance periods beginning
on or after January 1, 2017.
We have revised and are finalizing the proposed regulation text
accordingly. Specifically, we are finalizing the revisions to the
definition of a meaningful EHR user at Sec. 495.4 and we are adding
the same to the definition of a meaningful EHR user for MIPS at Sec.
414.1305. We are finalizing the attestation requirements at Sec.
495.40(a)(2)(i)(I) and (b)(2)(i)(I) to require such an attestation from
EPs, eligible hospitals, and CAHs as part of their demonstration of
meaningful EHR use under the Medicare and Medicaid EHR Incentive
Programs. We are also finalizing Sec. 414.1375(b)(3) to require this
attestation from all eligible clinicians under the advancing care
information performance category of MIPS, including eligible clinicians
who report on the advancing care information performance category as
part of an APM Entity group under the APM scoring standard as discussed
in section II.E.5.h. of this final rule with comment period.
D. Definitions
At Sec. 414.1305, in subpart O, we proposed definitions for the
following terms:
Additional performance threshold.
Advanced Alternative Payment Model (Advanced APM).
Advanced APM Entity.
Affiliated practitioner.
Affiliated practitioner list.
Alternative Payment Model (APM).
APM Entity.
APM Entity group.
APM Incentive Payment.
Attestation.
Attributed beneficiary.
Attribution-eligible beneficiary.
Certified Electronic Health Record Technology (CEHRT).
CMS-approved survey vendor.
CMS Web Interface.
Covered professional services.
Eligible clinician.
Episode payment model.
Estimated aggregate payment amounts.
Final score.
Group.
Health Professional Shortage Areas (HPSA).
High priority measure.
Hospital-based MIPS eligible clinician.
Improvement activities.
Incentive payment base period.
Low-volume threshold.
Meaningful EHR user for MIPS.
Measure benchmark.
Medicaid APM.
Medical Home Model.
Medicaid Medical Home Model.
Merit-based Incentive Payment System (MIPS).
MIPS APM.
MIPS eligible clinician.
MIPS payment year.
New Medicare-Enrolled MIPS eligible clinician.
Non-patient facing MIPS eligible clinician.
Other Payer Advanced APM.
Other payer arrangement.
Partial Qualifying APM Participant (Partial QP).
Partial QP patient count threshold.
Partial QP payment amount threshold.
Participation List.
Performance category score.
Performance standards.
Performance threshold.
Qualified Clinical Data Registry (QCDR).
Qualified registry.
QP patient count threshold.
QP payment amount threshold.
QP Performance Period.
Qualifying APM Participant (QP).
Rural areas.
Small practices.
Threshold Score.
Topped out non-process measure.
Topped out process measure.
Some of these terms are new, established in conjunction with MIPS and
APMs, while others are used in existing CMS programs. Of the new terms
and definitions, some have been developed alongside the policies of
this regulation while others are defined by statute.
Specifically, the following terms and definitions were established by
the MACRA: APM, Eligible Alternative Payment Entity (which we refer to
as an Advanced APM Entity), Composite Performance Score (which we refer
to as final score), Eligible professional or EP (which we refer to as
an eligible clinician), MIPS Eligible professional or MIPS EP (which we
refer to as a MIPS eligible clinician), MIPS adjustment factor (which
we refer to as a MIPS payment adjustment factor), additional positive
MIPS payment adjustment factor (which we refer to as additional MIPS
payment adjustment factor), Qualifying APM Participant, and Partial
Qualifying APM Participant.
These terms and definitions are discussed in detail in relevant
sections of this final rule with comment period.
E. MIPS Program Details
1. MIPS Eligible Clinicians
We believe a successful MIPS program fully equips clinicians
identified as MIPS eligible clinicians with the tools and incentives to
focus on improving health care quality, efficiency, and patient safety
for all their patients. Under MIPS, MIPS eligible clinicians are
incentivized to engage in proven improvement measures and activities
that impact patient health and safety and are relevant for their
patient population. One of our strategic goals in developing the MIPS
program is to advance a program that is meaningful, understandable, and
flexible for participating MIPS eligible clinicians. One way we believe
this will be accomplished is by minimizing MIPS eligible clinicians'
burden. We have made an effort to focus on policies that remove as much
administrative burden as possible from MIPS eligible clinicians and
their practices while still providing meaningful incentives for high-
quality, efficient care. In addition, we hope to balance practice
diversity with flexibility to address varied MIPS eligible clinicians'
practices. Examples of this flexibility include special consideration
for non-patient facing MIPS eligible clinicians, an exclusion from MIPS
for eligible clinicians who do not exceed the low-volume threshold, and
other proposals discussed below.
a. Definition of a MIPS Eligible Clinician
Section 1848(q)(1)(C)(i) of the Act, as added by section 101(c)(1)
of the MACRA, outlines the general definition of a MIPS eligible
clinician for the MIPS program. Specifically, for the first and second
year for which MIPS applies to payments (and the performance period for
such years) a MIPS eligible clinician is defined as a physician (as
defined in section 1861(r) of the Act), a physician assistant, nurse
practitioner, clinical nurse specialist (as such terms are defined in
section 1861(aa)(5) of the Act), a certified registered nurse
anesthetist (as defined in section 1861(bb)(2) of the Act), and a group
that includes such professionals. The statute also provides flexibility
to specify additional eligible clinicians (as defined in section
1848(k)(3)(B) of the Act) as MIPS eligible clinicians in the third and
subsequent years of MIPS. As discussed in the proposed rule (81 FR
28177 through 28178), section 1848(q)(1)(C)(ii) and (v) of the Act
specifies several exclusions from the definition of a MIPS eligible
clinician, which include clinicians who are determined to be new
Medicare-enrolled eligible clinicians, QPs and Partial QPs, or who do
not exceed the low-volume threshold pertaining to the dollar value of
billed Medicare Part B allowed charges or Part B-enrolled beneficiary
count. In addition, section 1848(q)(1)(A) of the Act requires the
Secretary to permit any eligible clinician (as defined in section
1848(k)(3)(B) of the Act) who is not a MIPS eligible clinician the
option to volunteer to report on applicable measures and activities
under MIPS. Section 1848(q)(1)(C)(vi) of the Act clarifies that a MIPS
payment adjustment factor (or additional MIPS payment adjustment
factor) will not be applied to an individual who is not a MIPS eligible
clinician for a year, even if such individual voluntarily reports
measures under MIPS. For purposes of this section of the final rule
with comment period, we use the term ``MIPS payment adjustment'' to
refer to the MIPS payment adjustment factor (or additional MIPS payment
adjustment factor) as specified in section 1848(q)(1)(C)(vi) of the
Act.
To implement the MIPS program we must first establish and define a
MIPS eligible clinician in accordance with the statutory definition. We
proposed to define a MIPS eligible clinician at Sec. 414.1305 as a
physician (as defined in section 1861(r) of the Act), a physician
assistant, nurse practitioner, and clinical nurse specialist (as such
terms are defined in section 1861(aa)(5) of the Act), a certified
registered nurse anesthetist (as defined in section 1861(bb)(2) of the
Act), and a group that includes such professionals. In addition, we
proposed that QPs and Partial QPs who do not report data under MIPS,
low-volume threshold eligible clinicians, and new Medicare-enrolled
eligible clinicians as defined at Sec. 414.1305 would be excluded from
this definition per the statutory exclusions defined in section
1848(q)(1)(C)(ii) and (v) of the Act. We intend to consider using our
authority under section 1848(q)(1)(C)(i)(II) of the Act to expand the
definition of a MIPS eligible
clinician to include additional eligible clinicians (as defined in
section 1848(k)(3)(B) of the Act) through rulemaking in future years.
Additionally, in accordance with section 1848(q)(1)(A) and
(q)(1)(C)(vi) of the Act, we proposed to allow eligible clinicians who
are not MIPS eligible clinicians, as defined at proposed Sec.
414.1305, the option to voluntarily report measures and activities for
MIPS. We proposed at Sec. 414.1310(d) that those eligible clinicians
who are not MIPS eligible clinicians, but who voluntarily report on
applicable measures and activities specified under MIPS, would not
receive an adjustment under MIPS; however, they would have the
opportunity to gain experience in the MIPS program. We were
particularly interested in public comments regarding the feasibility
and advisability of voluntary reporting in the MIPS program for
entities such as RHCs and/or FQHCs, including comments regarding the
specific technical issues associated with reporting that are unique to
these health care providers. We anticipate that some eligible
clinicians who will not be MIPS eligible clinicians during the first 2
years of MIPS, such as physical and occupational therapists, clinical
social workers, and others who have been reporting quality measures
under the PQRS for a number of years, will want the ability to continue
to report and gain experience under MIPS. We requested comments on
these proposals.
The following is a summary of the comments we received regarding
our proposed definition of the term MIPS eligible clinician and our
proposal to allow eligible clinicians who are not MIPS eligible
clinicians the option to voluntarily report measures and activities for
MIPS.
Comment: Commenters supported the option for RHCs and FQHCs to
voluntarily report, but noted that RHCs and FQHCs may not have experience
using EHR technology or the resources to invest in CEHRT and requested
that CMS adjust for the social determinants of health status.
Response: We appreciate the feedback on the role of socioeconomic
status in quality measurement. We continue to evaluate the potential
impact of social risk factors on measure performance. One of our core
objectives is to improve beneficiary outcomes, and we want to ensure
that complex patients as well as those with social risk factors receive
excellent care.
Comment: Several commenters expressed support for the proposed
definition of a MIPS eligible clinician and the proposal to allow
eligible clinicians who are not MIPS eligible to voluntarily report,
which encourages interdisciplinary and team-based services necessary to
address the full spectrum of patient and family needs and quality of
life concerns throughout the care continuum and across health system
and community-based care settings. One commenter expressed appreciation
for CMS using practitioner-neutral language and including nurse
practitioners.
Response: We appreciate the support from commenters.
Comment: In regard to the definition of a MIPS eligible clinician,
one commenter recommended that certified registered nurse anesthetists
be removed from the list of MIPS eligible clinicians because there are
not applicable measures for their job duties and they do not treat
diseases. Another commenter requested that CMS align the definition of
an eligible clinician in both the Medicare and Medicaid programs
because nurse practitioners do not qualify for the Medicare EHR
Incentive Program for Eligible Professionals, but do qualify for the
Medicaid EHR Incentive Program for Eligible Professionals. One
commenter expressed concern with the inclusion of nurse practitioners
and physician assistants in the definition of a MIPS eligible clinician
due to such providers needing to purchase and implement an EHR system
in a short timeframe and requested that CMS postpone the inclusion of
nurse practitioners and physician assistants.
Response: We appreciate the recommendations from the commenters and
note that section 1848(q)(1)(C)(i) of the Act defines a MIPS eligible
clinician, for the first and second MIPS payment years, as a physician
(as defined in section 1861(r) of the Act), a physician assistant,
nurse practitioner, clinical nurse specialist (as such terms are
defined in section 1861(aa)(5) of the Act), a certified registered
nurse anesthetist (as defined in section 1861(bb)(2) of the Act), and a
group that includes such professionals. We do not have discretion under
the statute to amend the definition of a MIPS eligible clinician by
excluding clinician types that the statute expressly includes, such as
certified registered nurse anesthetists, nurse practitioners, and
physician assistants. We note, however, that several policies may
alleviate the concerns of commenters regarding the availability of
applicable measures and activities, and health IT implementation costs.
For example, as discussed in section II.E.3.c. of this final rule with
comment period, we are finalizing a higher low-volume threshold to
ensure that MIPS eligible clinicians who do not exceed $30,000 of
billed Medicare Part B allowed charges or 100 Part B-enrolled Medicare
beneficiaries are excluded from MIPS. Also, we note that while non-
patient facing MIPS eligible clinicians are not exempt from
participating in MIPS or a performance category entirely, as discussed
in section II.E.1.b. of this final rule with comment period, we are
establishing a process that applies, to the extent feasible and
appropriate, alternative measures or activities for non-patient facing
MIPS eligible clinicians that fulfill the goals of the applicable
performance category. In addition, as discussed in section II.E.6.b.(2)
of this final rule with comment period, we may re-weight performance
categories if there are not sufficient measures applicable and
available to each MIPS eligible clinician to ensure that MIPS eligible
clinicians, including those who are non-patient facing, who do not have
sufficient alternative measures and activities that are applicable and
available in a performance category are scored appropriately.
In addition, we recognize that under MIPS, there will be more
eligible clinicians subject to the requirements of EHR reporting than
were previously eligible under the Medicare and/or Medicaid EHR
Incentive Program, including hospital-based MIPS eligible clinicians,
nurse practitioners, physician assistants, clinical nurse specialists,
and certified registered nurse anesthetists. Since many of these non-
physician clinicians are not eligible to participate in the Medicare
and/or Medicaid EHR Incentive Program, we have little evidence as to
whether there are sufficient measures applicable and available to these
types of MIPS eligible clinicians under our proposals for the advancing
care information performance category. As a result, we have provided
additional flexibilities to mitigate negative adjustments for the first
performance year (CY 2017) in order to allow hospital-based MIPS
eligible clinicians, nurse practitioners, physician assistants,
clinical nurse specialists, certified registered nurse anesthetists,
and other MIPS eligible clinicians to familiarize themselves with the
MIPS program. Section II.E.5.g.(8) of this final rule with comment
period describes our final policies regarding the re-weighting of the
advancing care information performance category within the final score,
in which we would assign a weight of zero when there are not sufficient
measures applicable and available.
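The two mitigating policies described above, the low-volume threshold exclusion and the re-weighting of the advancing care information (ACI) performance category to zero, can be sketched as follows. This is an illustrative sketch only, not CMS's implementation: the function names are invented, and the proportional redistribution of the zeroed ACI weight to the remaining categories is an assumption for illustration (the rule's actual re-weighting policies are described in sections II.E.5.g.(8) and II.E.6.b.(2)).

```python
# Dollar and beneficiary limits as stated in section II.E.3.c. of this
# final rule with comment period.
LOW_VOLUME_CHARGES = 30_000       # billed Medicare Part B allowed charges
LOW_VOLUME_BENEFICIARIES = 100    # Part B-enrolled Medicare beneficiaries


def below_low_volume_threshold(allowed_charges, beneficiary_count):
    """A clinician who does not exceed either limit is excluded from MIPS."""
    return (allowed_charges <= LOW_VOLUME_CHARGES
            or beneficiary_count <= LOW_VOLUME_BENEFICIARIES)


def reweight_without_aci(weights):
    """Assign the ACI category a weight of zero and renormalize the
    remaining category weights proportionally.

    The proportional redistribution here is a simplifying assumption;
    the rule's actual redistribution is specified elsewhere in this
    final rule with comment period.
    """
    reweighted = dict(weights, advancing_care_information=0.0)
    remaining = sum(reweighted.values())
    if remaining == 0:
        raise ValueError("no remaining categories to carry the weight")
    return {category: w / remaining for category, w in reweighted.items()}
```

For example, a clinician with $25,000 in Part B allowed charges would fall below the threshold and be excluded regardless of beneficiary count.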
Comment: One commenter requested that suppliers of portable x-ray
and
[[Page 77038]]
independent diagnostic testing facility services be excluded from
the definition of a MIPS eligible clinician and recommended that CMS
create an alternate pathway allowing for adequate payment updates to
reflect the rising cost of care.
Response: We note that the MIPS payment adjustment applies only to
the amount otherwise paid under Part B with respect to items and
services furnished by a MIPS eligible clinician during a year. As
discussed in section II.E.7. of this final rule with comment period, we
will apply the MIPS adjustment at the TIN/NPI level. In regard to
suppliers of portable x-ray and independent diagnostic testing facility
services, we note that such suppliers are not themselves included in
the definition of a MIPS eligible clinician. However, there may be
circumstances in which a MIPS eligible clinician would furnish the
professional component of a Part B covered service that is billed by
such a supplier. For example, a radiologist who is a MIPS eligible
clinician could furnish the interpretation and report (professional
component) for an x-ray service, and the portable x-ray supplier could
bill for the global x-ray service (combined technical and professional
component) or bill separately for the professional component of the x-
ray service. In that case, the professional component (billed either on
its own or as part of the global service) could be considered a service
for which payment is made under Part B and furnished by a MIPS eligible
clinician. Those services could be subject to MIPS adjustment based on
the MIPS eligible clinician's performance during the applicable
performance period. Because, however, those services are billed by
suppliers that are not MIPS eligible clinicians, it is not
operationally feasible for us at this time to associate those billed
allowed charges with a MIPS eligible clinician at an NPI level in order
to include them for purposes of applying any MIPS payment adjustment.
Comment: One commenter indicated that the status of pathologists
working in independent laboratories is unclear with regard to the
definition of a MIPS eligible clinician and requested clarification as
to whether or not they would be included given that they were
considered EPs under PQRS.
Response: We note that pathologists, including pathologists
practicing in independent laboratories, are considered MIPS eligible
clinicians and thus, required to participate in MIPS and subject to the
MIPS payment adjustment. The MIPS payment adjustment applies only to
the amount otherwise paid under Part B with respect to items and
services furnished by a MIPS eligible clinician during a year, in which
we will apply the MIPS adjustment at the TIN/NPI level (see section
II.E.7. of this final rule with comment period). For items and services
furnished by a pathologist practicing in an independent laboratory that
are billed by the laboratory, such items and services may be subject to
MIPS adjustment based on the MIPS eligible clinician's performance
during the applicable performance period. For those billed Medicare
Part B allowed charges we are able to associate with a MIPS eligible
clinician at an NPI level, such items and services furnished by such
pathologist would be included for purposes of applying any MIPS payment
adjustment.
Comment: A few commenters encouraged CMS to expand the list of MIPS
eligible clinicians further to promote integrated care. One commenter
suggested that we include certified nurse midwives as MIPS eligible
clinicians. Another commenter encouraged CMS to ensure that specialists
can successfully participate in the MIPS. One commenter indicated that
MIPS accommodates the masses of physicians, but falls short in
including consulted clinicians. A few commenters requested that we
expand the definition of a MIPS eligible clinician to include
therapists, dieticians, social workers, and other Medicare Part B
suppliers as soon as possible in order for such clinicians to earn
positive MIPS payment adjustments. One commenter recommended that the
definition of MIPS eligible clinician be expanded to include all
Medicare supplier types, including ambulatory services.
Response: We appreciate the suggestions from the commenters and
will take them into account as we consider expanding the definition of
a MIPS eligible clinician for year 3 in future rulemaking. We interpret
the comment regarding consulted clinicians to refer to locum tenens and
clinicians contracted by a practice. We note that contracted clinicians
who meet the definition of a MIPS eligible clinician are required to
participate in MIPS. In regard to locum tenens clinicians, they bill
for the items and services they furnish using the NPI of the clinician
for whom they are substituting and, as such, do not bill Medicare in
their own right for the items and services they furnish. As such, locum
tenens clinicians are not MIPS eligible clinicians when they practice
in that capacity.
Comment: One commenter indicated that it is feasible to include
physical therapists in the expanded definition of a MIPS eligible
clinician given that physical therapists have been included in PQRS
since 2007. The commenter noted that there will be a negative impact on
the quality reporting rates of physical therapists if they are excluded
from MIPS in 2017 and 2018. Another commenter recommended that CMS
define provisions for physical therapists, occupational therapists, and
speech language pathologists as soon as possible in order to provide
sufficient time for building new systems for operation in year 3 of
MIPS. A few commenters requested clarification on how MIPS will apply
to physical therapists, occupational therapists, and speech language
pathologists working with Medicare beneficiaries. One commenter
suggested that therapists participating in MIPS should be scored using
the same scoring weights for the quality and cost performance
categories that apply to MIPS eligible clinicians in the first 2 years.
The commenter noted that the same transition scoring would be fair and
could mitigate severe penalties for clinicians new to MIPS.
Response: We appreciate the concerns and recommendations from the
commenters. In regard to expanding the definition of a MIPS eligible
clinician for year 3, we will consider the suggestions from the
commenters. We anticipate that some eligible clinicians who will not be
included in the definition of a MIPS eligible clinician during the
first 2 years of MIPS, such as physical and occupational therapists,
clinical social workers, and others that have been reporting quality
measures under the PQRS for a number of years, will want to have the
ability to continue to report and gain experience under MIPS. We note
that eligible clinicians who are not included in the definition of a
MIPS eligible clinician during the first 2 years of MIPS (or any
subsequent year) may voluntarily report on measures and activities
under MIPS, but will not be subject to the MIPS payment adjustment. We
do intend, however, to provide informative performance feedback to
clinicians who voluntarily report to MIPS, which would include the same
performance category and final score rules that apply to all MIPS
eligible clinicians. We believe this informational performance feedback
will help prepare those clinicians who voluntarily report to MIPS.
Comment: Some commenters requested that CMS allow facility-based
clinicians who provide outpatient services, such as physical
therapists, occupational therapists, and speech language pathologists,
to participate in MIPS and earn MIPS payment
[[Page 77039]]
adjustments by the third year of the program. One commenter expressed
concern that without inclusion in the Quality Payment Program, these
facility-based clinicians would be disadvantaged. Another commenter
expressed concern that the criteria for including non-physician
clinicians later in MIPS are not clear and recommended that clarity be
provided, including performance categories that are specific to each
specialty and type of practice.
Response: We appreciate the concerns and recommendations from the
commenters, and will take them into account as we consider expanding
the definition of a MIPS eligible clinician for year 3 in future
rulemaking.
Comment: One commenter did not support expanding the definition of
a MIPS eligible clinician in year 3. The commenter noted that none of
their physical therapists use CEHRT and that switching in year 3 would
require significant capital and personnel. The commenter recommended
postponing any expansion until year 4 or 5.
Response: We appreciate the commenter expressing concerns and
recognize that eligible clinicians and MIPS eligible clinicians will
have a spectrum of experiences with using EHR technology. As we
consider expanding the definition of a MIPS eligible clinician to
include additional eligible clinicians in year 3, we will consider how
such eligible clinicians would be scored for each performance category
in future rulemaking.
Comment: One commenter recommended that CMS convene a technical
expert panel of eligible clinicians who will not be included in the
definition of a MIPS eligible clinician during the first 2 years of
MIPS to help adapt the Quality Payment Program to their needs.
Response: We thank the commenter for the suggestion and will
consider the recommendation as we consider expanding the definition of
a MIPS eligible clinician to include additional eligible clinicians for
year 3 in future rulemaking and prepare for the operationalization of
the expanded definition. We are committed to continuously engaging
stakeholders as we implement MIPS, and establish and operationalize
future policies.
Comment: One commenter expressed concern about the difficulties
hospital-based clinicians have had reporting under PQRS and recommended
offering hospital-based clinicians more flexibility in adopting MIPS.
Response: As previously noted, we recognize that there may not be
sufficient measures applicable and available for certain performance
categories for hospital-based MIPS eligible clinicians participating in
MIPS. In section II.E.5.g.(8)(a)(i) of this final rule with comment
period, we describe the re-weighting of the advancing care information
performance category when there are not sufficient measures applicable
and available for hospital-based MIPS eligible clinicians.
Comment: A few commenters expressed concerns that our MIPS
proposals focused on clinicians in large groups or who are hospital-
based and did not include non-physician clinicians. One commenter
requested that non-physician clinicians be recognized for their
critical role in the health delivery system and providing high quality,
low cost health care to the Medicare population.
Response: We disagree with the commenters and note that the
definition of a MIPS eligible clinician includes non-physician
clinicians such as physician assistants, nurse practitioners, clinical
nurse specialists, and certified registered nurse anesthetists. As
previously noted, in future rulemaking, we will consider expanding the
definition of a MIPS eligible clinician to include additional eligible
clinicians starting in year 3.
Comment: A few commenters requested clarification regarding whether
or not Doctors of Chiropractic would be able to participate in MIPS.
Another commenter appreciated that Doctors of Chiropractic are included
as MIPS eligible clinicians, but believed that chiropractors would be
put at a severe disadvantage in participating in MIPS or APMs due to
CMS' restrictions on chiropractic coverage. The commenter encouraged
CMS to expand the billing codes for Doctors of Chiropractic to cover
the full scope of licensure.
Response: We note that chiropractors are included in the definition
of ``physician'' under section 1861(r) of the Act, and therefore, are
MIPS eligible clinicians. In regard to the comment pertaining to the
expansion of billing codes for chiropractors, we note that such comment
is out-of-scope given that we did not propose any billing code policies
in the proposed rule.
Comment: One commenter requested clarification on whether or not
participation in MIPS is mandatory.
Response: We note that clinicians who are included in the
definition of a MIPS eligible clinician as defined in section
II.E.1.a. of this final rule with comment period are required to
participate in MIPS unless they are excluded from the definition of a
MIPS eligible clinician based on one of the three exclusions described
in sections II.E.3.a., II.E.3.b., and II.E.3.c. of this final rule with
comment period.
Comment: One commenter requested clarification on how CMS will
treat hospitalist services under MIPS, specifically, what measures will
they report, whether the hospital's PFS payment amount for the
hospitalists' services will be subject to the MIPS payment adjustment,
and how hospitalists should report data since they do not have an
office practice or an EHR to participate.
Response: We note that hospitalists are required to participate in
MIPS unless otherwise excluded. As discussed in section II.E.6.b.(2) of
this final rule with comment period, we may re-weight performance
categories if there are not sufficient measures applicable and
available to each MIPS eligible clinician to ensure that MIPS eligible
clinicians, including hospitalists, who do not have sufficient
alternative measures and activities that are applicable and available
in a performance category are scored appropriately. For hospitalists
who meet the definition of a hospital-based MIPS eligible clinician,
section II.E.5.g.(8)(a)(i) of this final rule with comment period
describes the re-weighting of the advancing care information
performance category within the final score, in which we would assign a
weight of zero when there are not sufficient measures applicable and
available for hospital-based MIPS eligible clinicians. In section
II.E.5.b.(5) of the proposed rule (81 FR 28192), we sought comment on
the application of additional system measures, which would directly
impact hospitalists, and intend to address such policies in future
rulemaking. Also, we note that the MIPS payment adjustment would be
applied to the Medicare Part B payments for items and services
furnished by a hospital-based MIPS eligible clinician.
Comment: Some commenters expressed concern regarding the exclusion
of pharmacists under MIPS and APMs, and indicated that the payment
models would prevent program goals from being met unless all
practitioners, including pharmacists, are effectively integrated into
team-based care. A few commenters noted that pharmacists are
medication-use experts in the health care system, and directly
contribute toward many of the quality measures under both MIPS and
Advanced APMs. Because pharmacists are neither MIPS eligible clinicians
nor required practitioners under APMs, pharmacist expertise and
contributions may be underutilized and/or unavailable to certain
patients. A few
[[Page 77040]]
commenters recommended that the definition of a MIPS eligible clinician
include pharmacists given that they are a critical part of a patient
care team, in which they can provide a broad array of services to
patients and have a role in optimizing patient health outcomes as the
number and complexity of medications continues to rise. One commenter
recommended that the Quality Payment Program include metrics and
payment methodologies that recognize services provided by pharmacists
and align with other CMS and CDC programs.
Response: We appreciate the suggestions from the commenters. We
note that we do not have discretion under the statute to include
clinicians who do not meet the definition of a MIPS eligible clinician.
Thus, pharmacists would not be able to participate in MIPS.
Comment: One commenter requested that CMS clarify whether or not
MIPS requirements would apply to clinicians who are not Medicare-
enrolled eligible clinicians. Another commenter expressed concern that
the proposed rule did not address how MIPS payment adjustments would be
applied for clinicians who are not Medicare-enrolled eligible
clinicians.
Response: We note that clinicians who are included in the
definition of a MIPS eligible clinician and not otherwise excluded are
required to report under MIPS. However, a clinician who is not included
in the definition of a MIPS eligible clinician can voluntarily report
under MIPS and would not be subject to the MIPS payment adjustment.
Also, we note that eligible clinicians who are not Medicare-enrolled
eligible clinicians are not required to participate in MIPS, and would
not be subject to the MIPS payment adjustment given that the MIPS
payment adjustment is applied to Medicare Part B payments for items and
services furnished by a MIPS eligible clinician.
Comment: One commenter requested information on how locum tenens
clinicians will be assessed under MIPS.
Response: As previously noted, locum tenens clinicians bill for the
items and services they furnish using the NPI of the clinician for whom
they are substituting and, as such, do not bill Medicare in their own
right for the items and services they furnish. As such, locum tenens
clinicians are not MIPS eligible clinicians when they practice in that
capacity.
Comment: One commenter noted that facility-based clinicians in
California face unique challenges under state law and recommended that
rather than automatically using an eligible clinician's facility's
performance as a proxy for the quality and cost performance categories
as proposed, CMS should develop a voluntary option to allow eligible
clinicians who meet criteria to be considered a facility-based
clinician.
Response: We appreciate the suggestions from the commenter and will
consider them as we develop policies for applying a facility's
performance to a MIPS eligible clinician or group.
Comment: One commenter suggested that the types of eligible
clinicians who are not included in the definition of a MIPS eligible
clinician in 2017 and who have been submitting PQRS measures for years,
should be allowed to voluntarily participate in 2017 and earn MIPS
payment adjustments if they complete a successful attestation.
Response: We thank the commenter for their suggestion and note that
clinicians not included in the definition of a MIPS eligible clinician
have the option to voluntarily report on applicable measures and
activities under MIPS. However, the statute does not permit such
clinicians to be subject to the MIPS payment adjustment. Should we
expand the definition of a MIPS eligible clinician in future
rulemaking, such clinicians may be able to earn MIPS payment
adjustments beginning as early as the 2021 payment year.
Comment: A few commenters recommended that certified
anesthesiologist assistants be included in the definition of a MIPS
eligible clinician. One commenter stated that such inclusion would
provide the clarification that certified anesthesiologist assistants
are health care providers, increase the amount of quality reporting
under MIPS, and ensure certified anesthesiologist assistant
participation in APMs. The commenter noted that if certified
anesthesiologist assistants are not included in the definition of a
MIPS eligible clinician, patient access to care would be restricted.
Another commenter requested clarification regarding whether or not
anesthesiologist assistants would be excluded from MIPS reporting in
2017.
Response: We appreciate the suggestion from the commenters and note
that section 1861(bb)(2) of the Act specifies that the term ``certified
registered nurse anesthetist'' includes an anesthesiologist assistant.
Thus, anesthesiologist assistants are considered eligible for MIPS
beginning with the CY 2017 performance period.
Comment: One commenter requested that audiologists remain active
stakeholders in the MIPS implementation process, although they may not
be included in the program until year 3.
Response: We appreciate the recommendation from the commenter and
note that we are committed to actively engaging with all stakeholders
during the development and implementation of MIPS.
Comment: One commenter suggested that CPC+ clinicians should be
exempt from MIPS if the group TIN is participating in CPC+.
Response: We appreciate the suggestion from the commenter, but note
that the exclusions in this final rule with comment period only pertain
to new Medicare-enrolled eligible clinicians, QPs and Partial QPs who
do not report on applicable MIPS measures and activities, and eligible
clinicians who do not exceed the low-volume threshold. We refer readers
to section II.E.5.h. of this final rule with comment period, which
describes the APM scoring standard for MIPS eligible clinicians
participating in MIPS APMs; such provisions are applicable to MIPS
eligible clinicians participating in CPC+.
Comment: One commenter requested that CMS allow psychiatrists who
participate in ACOs or who work at least 30 percent of their time in
eligible integrated care settings to opt out of the reporting
requirements to avoid a negative MIPS payment adjustment. Another
commenter recommended that CMS exempt from the definition of a MIPS
eligible clinician those clinicians participating in all Alternative
Payment Models defined in Category 3 of the HCPLAN Alternative Payment
Models Framework. The commenter indicated that the exemption should
include all upside-gain sharing only models defined in the Framework,
including patient-centered medical home models, bundled payment models,
and episode of care models.
Response: We note that the statute only allows for certain
exclusions from MIPS, two of which are for QPs and Partial QPs.
Participating in an APM or other innovative payment model is not in
itself sufficient for an eligible clinician to become a QP or Partial
QP. As described in section II.F. of this final rule with comment
period, only eligible clinicians who are identified on CMS-maintained
lists as participants in Advanced APMs and meet the relevant QP or
Partial QP threshold may become QPs or Partial QPs.
After consideration of the public comments we received, we are
finalizing the following policies. We are
[[Page 77041]]
finalizing the definition at Sec. 414.1305 of a MIPS eligible
clinician, as identified by a unique billing TIN and NPI combination
used to assess performance, as any of the following (excluding those
identified at Sec. 414.1310(b)): A physician (as defined in section
1861(r) of the Act), a physician assistant, nurse practitioner, and
clinical nurse specialist (as such terms are defined in section
1861(aa)(5) of the Act), a certified registered nurse anesthetist (as
defined in section 1861(bb)(2) of the Act), and a group that includes
such clinicians. We are finalizing our proposed policies at Sec.
414.1310(b) and (c) that QPs, Partial QPs who do not report on
applicable measures and activities that are required to be reported
under MIPS for any given performance period in a year, low-volume
threshold eligible clinicians, and new Medicare-enrolled eligible
clinicians as defined at Sec. 414.1305 are excluded from this
definition per the statutory exclusions defined in section
1848(q)(1)(C)(ii) and (v) of the Act. In accordance with section
1848(q)(1)(A) and (q)(1)(C)(vi) of the Act, we are finalizing our
proposal at Sec. 414.1310(b)(2) to allow eligible clinicians (as
defined at Sec. 414.1305) who are not MIPS eligible clinicians the
option to voluntarily report measures and activities for MIPS.
Additionally, we are finalizing our proposal at Sec. 414.1310(d) that
in no case will a MIPS payment adjustment apply to the items and
services furnished during a year by individual eligible clinicians, as
described in paragraphs (b) and (c) of this section, who are not MIPS
eligible clinicians, including eligible clinicians who are not MIPS
eligible clinicians, but who voluntarily report on applicable measures
and activities specified under MIPS.
b. Non-Patient Facing MIPS Eligible Clinicians
Section 1848(q)(2)(C)(iv) of the Act requires the Secretary, in
specifying measures and activities for a performance category, to give
consideration to the circumstances of professional types (or
subcategories of those types determined by practice characteristics)
who typically furnish services that do not involve face-to-face
interaction with a patient. To the extent feasible and appropriate, the
Secretary may take those circumstances into account and apply
alternative measures or activities that fulfill the goals of the
applicable performance category to such non-patient facing MIPS
eligible clinicians. In carrying out these provisions, we are required
to consult with non-patient facing MIPS eligible clinicians.
In addition, section 1848(q)(5)(F) of the Act allows the Secretary
to re-weight MIPS performance categories if there are not sufficient
measures and activities applicable and available to each type of MIPS
eligible clinician. We assume many non-patient facing MIPS eligible
clinicians will not have sufficient measures and activities applicable
and available to report under the performance categories under MIPS. We
refer readers to section II.E.6.b.(2) of this final rule with comment
period for the discussion regarding how we addressed performance
categories weighting for MIPS eligible clinicians for whom no measures
exist in a given category.
To establish policies surrounding non-patient facing MIPS eligible
clinicians, we must first define the term ``non-patient facing.''
Currently, the PQRS, VM, and Medicare EHR Incentive Program include two
existing policies for considering whether an EP is providing patient-
facing services. To determine, for purposes of PQRS, whether an EP had
a ``face-to-face'' encounter with Medicare patients, we assess whether
the EP billed for services under the PFS that are associated with face-
to-face encounters, such as general office visit codes, outpatient
visits, and surgical procedures. Under PQRS, if an EP
bills for at least one service under the PFS during the performance
period that is associated with face-to-face encounters and reports
quality measures via claims or registries, then the EP is required to
report at least one ``cross-cutting'' measure. EPs who do not meet
these criteria are not required to report a cross-cutting measure. For
the purposes of PQRS, telehealth services have not historically been
included in the definition of face-to-face encounters. For more
information, please see the CY 2016 PFS final rule for these
discussions (80 FR 71140).
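The PQRS criteria described above amount to a two-part test: a cross-cutting measure is required only when the EP both billed at least one face-to-face PFS service during the performance period and reported quality measures via claims or registries. A minimal sketch, assuming a hypothetical stand-in for the face-to-face code set (the actual CMS list is not reproduced here), with telehealth services excluded per the historical PQRS treatment:

```python
# Hypothetical placeholder set; the actual face-to-face encounter code
# list is maintained by CMS and is not reproduced in this sketch.
FACE_TO_FACE_CODES = {"99202", "99213", "99214"}  # sample office visit codes


def must_report_cross_cutting(billed_codes, reports_via_claims_or_registry):
    """Return True when the EP meets both PQRS criteria described above:
    at least one billed PFS service associated with a face-to-face
    encounter, and quality reporting via claims or registries."""
    had_face_to_face = any(code in FACE_TO_FACE_CODES for code in billed_codes)
    return had_face_to_face and reports_via_claims_or_registry
```

An EP who bills only non-face-to-face services (for example, pathology interpretations) would fail the first prong and would not be required to report a cross-cutting measure.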
In the Stage 2 final rule (77 FR 54098 through 54099), the Medicare
EHR Incentive Program established a significant hardship exception from
the meaningful use payment adjustment under section 1848(a)(7)(A) of
the Act for EPs that lack face-to-face interactions with patients and
those who lack the need to follow up with patients. EPs with a primary
specialty of anesthesiology, pathology or radiology listed in the
Provider Enrollment, Chain, and Ownership System (PECOS) as of 6 months
prior to the first day of the payment adjustment year automatically
receive this hardship exemption (77 FR 54100). Specialty codes
associated with these specialties include 05-Anesthesiology, 22-
Pathology, 30-Diagnostic Radiology, 36-Nuclear Medicine, and 94-
Interventional Radiology. EPs with a different specialty are also able
to request this hardship exception through the hardship application
process. However, telehealth services could be counted by EPs who
choose to include these services within the definition of ``seen by the
EP'' for the purposes of calculating patient encounters with the EHR
Incentive Program (77 FR 53982).
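The automatic hardship exemption described above turns on a simple lookup against the EP's primary specialty code in PECOS as of 6 months before the payment adjustment year. A sketch of that check, using the five specialty codes listed above (function name invented for illustration):

```python
# Specialty codes that automatically receive the hardship exemption,
# per the Stage 2 final rule discussion above (77 FR 54100).
AUTO_EXEMPT_SPECIALTY_CODES = {
    "05",  # Anesthesiology
    "22",  # Pathology
    "30",  # Diagnostic Radiology
    "36",  # Nuclear Medicine
    "94",  # Interventional Radiology
}


def auto_hardship_exempt(pecos_primary_specialty_code):
    """True when the EP's PECOS primary specialty, as of 6 months before
    the payment adjustment year, is on the automatic-exemption list."""
    return pecos_primary_specialty_code in AUTO_EXEMPT_SPECIALTY_CODES
```

EPs with other specialty codes are not automatically exempt but, as noted above, may still request the exception through the hardship application process.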
In the MIPS and APMs RFI (80 FR 63484), we sought comments on MIPS
eligible clinicians that should be considered non-patient facing MIPS
eligible clinicians and the criteria we should use to identify these
MIPS eligible clinicians. Commenters were split when it came to
defining and identifying non-patient facing MIPS eligible clinicians.
Many took a specialty-driven approach. Commenters generally did not
support use of specialty codes alone, which is the approach used by the
Medicare EHR Incentive Program. Commenters indicated that these codes
do not necessarily delineate between the same specialists who may or
may not have patient-facing interaction. One example is cardiologists
who specialize in cardiovascular imaging, which is also coded as
cardiology. On the other hand, as one commenter mentioned, physicians
with specialty codes other than ``cardiology'' (for example, internal
medicine) may perform cardiovascular imaging services. Therefore, using
the specialty code for cardiology to identify clinicians who typically
do not provide patient-facing services would be both over-inclusive and
under-inclusive. Other commenters identified specialty types that they
believe should be considered non-patient facing MIPS eligible
clinicians. Specific specialty types included radiologists,
anesthesiologists, nuclear cardiology or nuclear medicine physicians,
and pathologists. Others pointed out that certain MIPS eligible
clinicians may be primarily non-patient facing MIPS eligible clinicians
even though they practice within a traditionally patient-facing
specialty. The MIPS and APMs RFI comments and listening sessions with
medical societies representing non-patient facing MIPS eligible
clinicians specified radiology/imaging, anesthesiology, nuclear
cardiology and oncology, and pathology as inclusive of non-patient
facing MIPS eligible clinicians. Commenters noted that roles within
specific types of specialties may need to be further delineated between
patient-facing and non-patient facing
[[Page 77042]]
MIPS eligible clinicians. An illustrative list of specific types of
clinicians within the non-patient facing spectrum includes:
Pathologists who may be primarily dedicated to working
with local hospitals to identify early indicators related to evolving
infectious diseases;
Radiologists who primarily provide consultative support
back to a referring physician or provide image interpretation and
diagnosis versus therapy;
Nuclear medicine physicians who play an indirect role in
patient care, for example as a consultant to another physician in
proper dose administration; or
Anesthesiologists who are primarily providing supervisory
oversight to Certified Registered Nurse Anesthetists.
After reviewing current policies, we proposed to define a non-
patient facing MIPS eligible clinician for MIPS at Sec. 414.1305 as an
individual MIPS eligible clinician or group that bills 25 or fewer
patient-facing encounters during a performance period. We considered a
patient-facing encounter as an instance in which the MIPS eligible
clinician or group billed for services such as general office visits,
outpatient visits, and procedure codes under the PFS. We intend to
publish the list of patient-facing encounter codes on a CMS Web site
similar to the way we currently publish the list of face-to-face
encounter codes for PQRS. This proposal differs from the current PQRS
policy in two ways. First, it creates a minimum threshold for the
quantity of patient-facing encounters that MIPS eligible clinicians or
groups would need to furnish to be considered patient-facing, rather
than classifying MIPS eligible clinicians as patient-facing based on a
single patient-facing encounter. Second, this proposal includes
telehealth services in the definition of patient-facing encounters.
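The proposed classification can be sketched as a simple count over billed encounter codes. This is a minimal illustration only: the code set shown is a hypothetical stand-in for the list of patient-facing encounter codes that CMS intends to publish, and the function name is an assumption, not part of the rule.

```python
# Sketch of the proposed classification (25-or-fewer threshold).
# PATIENT_FACING_CODES is a hypothetical stand-in for the published
# list of patient-facing encounter codes; under the proposal,
# telehealth service codes are included in this set.
PATIENT_FACING_CODES = {"99203", "99213", "99214"}  # illustrative only


def is_non_patient_facing(billed_codes, threshold=25):
    """Return True if the clinician billed `threshold` or fewer
    patient-facing encounters during the performance period."""
    count = sum(1 for code in billed_codes if code in PATIENT_FACING_CODES)
    return count <= threshold
```

For example, a radiologist who bills only a handful of office visits during the performance period would fall at or under the threshold and be classified as non-patient facing, rather than being classified as patient-facing on the basis of a single encounter as under PQRS.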
We believed that setting the non-patient facing MIPS eligible
clinician threshold for an individual MIPS eligible clinician or group at
25 or fewer billed patient-facing encounters during a performance
period is appropriate. We selected this threshold based on an analysis
of non-patient facing Healthcare Common Procedure Coding System (HCPCS)
codes billed by MIPS eligible clinicians. Using these codes and this
threshold, we identified approximately one quarter of MIPS eligible
clinicians as non-patient facing before MIPS exclusions, such as low-
volume and newly-enrolled eligible clinician policies, were applied.
The majority of clinicians enrolled in Medicare with specialties such
as anesthesiology, nuclear medicine, and pathology were identified as
non-patient facing in this analysis. The addition of telehealth to the
analysis did not affect the outcome, as it created a less than 0.01
percent change in MIPS eligible clinicians categorized as non-patient
facing.
Therefore, the proposed approach allows the definition of non-
patient facing MIPS eligible clinicians to include both MIPS eligible
clinicians who practice within specialties traditionally considered
non-patient facing, as well as MIPS eligible clinicians who provide
occasional patient-facing services that do not represent the bulk of
their practices. This definition is also consistent with the statutory
requirement that refers to professional types who typically furnish
services that do not involve patient-facing interaction with a patient.
In response to the MIPS and APMs RFI, some commenters believed that
MIPS eligible clinicians should be defined as non-patient facing MIPS
eligible clinicians based on whether their billing indicates they
provide face-to-face services. Commenters indicated that the use of
specific HCPCS codes in combination with specialty codes may be a more
appropriate way to identify MIPS eligible clinicians that have no
patient interaction.
We also proposed to include telehealth services in the definition
of patient-facing encounters. Various MIPS eligible clinicians use
telehealth services as an innovative way to deliver care to
beneficiaries and we believe these services, while not furnished in-
person, should be recognized as patient-facing. In addition, Medicare
eligible telehealth services substitute for an in-person encounter and
meet other site requirements under the PFS as defined at Sec. 410.78.
The proposed addition of the encounter threshold for patient-facing
MIPS eligible clinicians was intended to minimize concerns that a MIPS
eligible clinician could be misclassified as patient-facing as a result
of providing occasional telehealth services that do not represent the
bulk of their practice. Finally, we believed that this proposed
definition of a non-patient facing MIPS eligible clinician for MIPS
could be consistently used throughout the MIPS program to identify
those MIPS eligible clinicians for whom certain proposed requirements
for patient-facing MIPS eligible clinicians (such as reporting cross-
cutting measures) may not be meaningful.
We weighed several options when considering the appropriate
definition of non-patient facing MIPS eligible clinicians for MIPS;
some options were similar to those we considered in implementing the
Medicare EHR Incentive Program. One option we considered was basing the
non-patient facing MIPS eligible clinician's definition on a set
percentage of patient-facing encounters, such as 5 to 10 percent, that
was tied to the same list of patient-facing encounter codes discussed
in this section of this final rule with comment period. Another option
we considered was the identification of non-patient facing MIPS
eligible clinicians for MIPS only by specialty, which might be a
simpler approach. However, we did not consider this approach sufficient
for identifying all the possible non-patient facing MIPS eligible
clinicians, as some non-patient facing MIPS eligible clinicians
practice in multi-specialty practices alongside patient-facing MIPS
eligible clinicians with different specialties. We would likely have
had to develop a separate process to identify non-patient facing MIPS
eligible clinicians in other specialties, whereas maintaining a single
definition that is aligned across performance categories is simpler.
Many comments from the MIPS and APMs RFI discouraged use of specialty
codes alone. Additionally, we believed our proposal would allow us to
more accurately identify MIPS eligible clinicians who are non-patient
facing by applying a threshold to recognize that a MIPS eligible
clinician who furnishes almost exclusively non-patient facing services
should be treated as a non-patient facing MIPS eligible clinician
despite furnishing a small number of patient-facing services.
In the MIPS and APMs RFI (80 FR 63484), we also requested comments
on what types of measures and/or improvement activities (new or from
other payment systems) we should use to assess non-patient facing MIPS
eligible clinicians' performance and how we should apply the MIPS
performance categories to non-patient facing MIPS eligible clinicians.
Commenters were split on these subjects. A number of commenters stated
that non-patient facing MIPS eligible clinicians should be exempt from
specific performance categories under MIPS or should be exempt from
MIPS as a whole. Commenters who did not favor exemptions generally
suggested that we focus on process measures and work with specialty
societies to develop new, more clinically relevant measures for non-
patient facing MIPS eligible clinicians.
We took these stakeholder comments into consideration. We note that
section 1848(q)(2)(C)(iv) of the Act does not
[[Page 77043]]
grant the Secretary discretion to exempt non-patient facing MIPS
eligible clinicians from a performance category entirely, but rather to
apply to the extent feasible and appropriate alternative measures or
activities that fulfill the goals of the applicable performance
category. However, we have placed safeguards to ensure that MIPS
eligible clinicians, including those who are non-patient facing, who do
not have sufficient alternative measures that are applicable and
available in a performance category are scored appropriately. We
proposed to apply the Secretary's authority under section 1848(q)(5)(F)
of the Act to re-weight a performance category's score to zero if
there is no performance category score, or to lower the weight of the
quality performance category score if there are not at least three
scored measures. Please refer to section II.E.6.b.(2)(b) in the
proposed rule for details on the re-weighting proposals. Accordingly,
we proposed alternative requirements for non-patient facing MIPS
eligible clinicians across the proposed rule (see sections II.E.5.b.,
II.E.5.e., and II.E.5.f. of the proposed rule for more details). While
non-patient facing MIPS eligible clinicians will not be exempt from any
performance category under MIPS, we believe these alternative
requirements fulfill the goals of the applicable performance categories
and are in line with the commenters' desire to ensure that non-patient
facing MIPS eligible clinicians are not placed at an unfair
disadvantage under the new program. The requirements also build on
prior program components in meaningful ways and are meant to help us
appropriately assess and incentivize non-patient facing MIPS eligible
clinicians. We requested comments on these proposals.
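The re-weighting safeguard described above can be illustrated as follows. This is a hedged sketch of the idea behind the section 1848(q)(5)(F) authority, not CMS's scoring implementation; the category names and base weights used here are assumptions for illustration only.

```python
def reweight(base_weights, category_scores):
    """Zero out any performance category with no score and
    redistribute its weight proportionally across the remaining
    categories (sketch of the re-weighting idea; not CMS's
    actual scoring methodology)."""
    weights = dict(base_weights)
    for category, score in category_scores.items():
        if score is None:  # no performance category score available
            weights[category] = 0.0
    total = sum(weights.values())
    if total == 0:
        return weights
    # Renormalize so the remaining category weights still sum to 1.0.
    return {category: w / total for category, w in weights.items()}
```

Under this sketch, a non-patient facing clinician with no applicable measures in one category would have that category's weight shifted onto the categories in which the clinician can be meaningfully scored.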
The following is a summary of the comments we received regarding
our proposal that defines non-patient facing MIPS eligible clinicians
for MIPS as an individual MIPS eligible clinician or group that bills
25 or fewer patient-facing encounters (including telehealth services)
during a performance period.
Comment: A few commenters supported the proposed definition of non-
patient facing MIPS eligible clinicians.
Response: We appreciate the support from commenters.
Comment: One commenter requested that pathologists (as identified
in PECOS) be automatically identified as non-patient facing MIPS
eligible clinicians at the beginning of each year. The commenter noted
that it seems reasonable to use PECOS to identify non-patient facing
specialties.
Response: We appreciate the commenter expressing the importance for
MIPS eligible clinicians to be identified as non-patient facing MIPS
eligible clinicians at the beginning of each year. We believe that it
would be beneficial for individual MIPS eligible clinicians and groups
to know in advance of a performance period whether or not they qualify
as a non-patient facing MIPS eligible clinician. For purposes of this
section, we are coining the term ``non-patient facing determination
period'' to refer to the timeframe used to assess claims data for
making eligibility determinations regarding non-patient facing status.
We define the
non-patient facing determination period to mean a 24-month assessment
period, which includes a two-segment analysis of claims data regarding
patient-facing encounters during an initial 12-month period prior to
the performance period followed by another 12-month period during the
performance period.
The initial 12-month segment of the non-patient facing
determination period would span from the last 4 months of a calendar
year 2 years prior to the performance period followed by the first 8
months of the next calendar year and include a 60-day claims run out,
which will allow us to inform eligible clinicians and groups of their
non-patient facing status during the month (December) prior to the start of
the performance period. We believe that the initial non-patient facing
determination period enables us to make eligibility determinations
based on 12 months of data that is as close to the performance period
as possible while informing eligible clinicians of their non-patient
facing status prior to the performance period. The second 12-month
segment of the non-patient facing determination period would span from
the last 4 months of a calendar year 1 year prior to the performance
period followed by the first 8 months of the performance period in the
next calendar year and include a 60-day claims run out, which will
allow us to inform additional eligible clinicians and groups of their
non-patient facing status during the performance period.
Thus, for purposes of the 2019 MIPS payment adjustment, we will
initially identify individual eligible clinicians and groups who are
considered non-patient facing MIPS eligible clinicians based on 12
months of data starting from September 1, 2015 to August 31, 2016. In
order to account for the identification of additional individual
eligible clinicians and groups that may qualify as non-patient facing
during the 2017 performance period, we will conduct another eligibility
determination analysis based on 12 months of data starting from
September 1, 2016 to August 31, 2017.
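The two 12-month determination segments can be derived mechanically from the performance year, as this sketch shows (the function names are illustrative assumptions; the dates follow the description above, each segment running from September 1 to August 31 with a 60-day claims run out):

```python
from datetime import date, timedelta


def determination_segments(performance_year):
    """Return the two 12-month segments of the non-patient facing
    determination period for a given performance year. Each segment
    runs September 1 through August 31."""
    first = (date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31))
    second = (date(performance_year - 1, 9, 1), date(performance_year, 8, 31))
    return first, second


def claims_run_out_end(segment_end, days=60):
    """End of the claims run out window: claims processed during the
    60 days after a segment closes still count toward the determination."""
    return segment_end + timedelta(days=days)
```

For the 2017 performance period (2019 MIPS payment adjustment), this yields the initial segment of September 1, 2015 through August 31, 2016 and the second segment of September 1, 2016 through August 31, 2017, matching the dates stated above.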
Comment: One commenter requested that CMS consider allowing
physicians in other specialties to declare by exception that they
deserve a similar exemption as those that are identified in the
proposed rule as non-patient facing MIPS eligible clinicians, which can
be confirmed by CMS through coding analysis.
Response: We disagree with the approach described by the commenter
because the statute does not provide discretion in establishing
exclusions other than the three exclusions specified in section II.E.3.
of this final rule with comment period. Also, we note that non-patient
facing MIPS eligible clinicians are identified based on an analysis we
conduct using claims data to determine such status; this is not a
status that clinicians make an election for purposes of MIPS.
Comment: Many commenters expressed concerns that the threshold set
forth in the proposed definition of a non-patient facing MIPS eligible
clinician (for example, an individual MIPS eligible clinician or group
that bills 25 or fewer patient-facing encounters during a performance
period) was too low. The commenters believed that many clinicians in
certain specialties would be classified as patient-facing even though
clinicians in those specialties are predominantly non-patient facing.
One commenter stated that MIPS eligible clinicians with such a low
number of patient-facing encounters may not realize they would be
considered patient-facing and subject to additional reporting
requirements. Many commenters recommended alternative options for
establishing a threshold relating to the billing of patient-facing
encounters, including the following: A threshold of 50 or fewer
patient-facing encounters; a threshold of 100 or fewer patient-facing
encounters, which would represent a somewhat larger portion of the MIPS
eligible clinician's practice, averaging approximately two patient-
facing encounters per week; and a threshold of 150 or fewer billed
Medicare patient-facing encounters. Other commenters suggested that CMS
consider automatically designating certain specialties, such as
anesthesiology or radiology, as non-patient facing unless a clinician
in such specialty bills more than 100 patient-facing encounters. One
commenter suggested that CMS base the threshold on a percentage of
patients seen (for example, 80 percent of services furnished are
determined to be non-patient facing) or claims or allowed
[[Page 77044]]
charges (for example, 85 percent of claims or charges are for non-
patient facing services), or a combination of the two percentage-based
options.
Response: We thank the commenters for expressing their concerns and
recommendations regarding the proposed threshold used to define a non-
patient facing MIPS eligible clinician. Based on the comments
indicating that the proposed threshold would misclassify certain
specialties that are predominantly non-patient facing, and in order to
more accurately identify MIPS eligible clinicians who are non-patient
facing, we are modifying our proposal and increasing the threshold to
determine when a MIPS eligible clinician is considered non-patient
facing. Therefore, we are finalizing a modification to our proposal to
define a non-patient facing MIPS eligible clinician as an individual
MIPS eligible clinician that bills 100 or fewer patient-facing
encounters (including Medicare telehealth services defined in section
1834(m) of the Act) during the non-patient facing determination period,
and a group provided that more than 75 percent of the NPIs billing
under the group's TIN meet the definition of a non-patient facing
individual MIPS eligible clinician during the non-patient facing
determination period. We believe that a threshold of 100 or fewer
billed patient-facing encounters more accurately differentiates between
MIPS eligible clinicians who furnish a majority of patient-facing
services and are considered patient-facing, and MIPS eligible
clinicians who provide occasional patient-facing services that do not
reflect the bulk of services provided by the practice or who would
traditionally be considered non-patient facing. This modified
threshold, which applies at the individual level, would reduce the risk
of identifying individual MIPS
eligible clinicians as patient-facing who would otherwise be considered
non-patient facing. Similarly, the modified threshold that applies at
the group level, as previously noted, would reduce the risk of
identifying groups as patient-facing that would otherwise be considered
non-patient facing. Also, we considered increasing the threshold based
on different approaches. As previously described, one option was basing
the definition of a non-patient facing MIPS eligible clinician on a set
percentage of patient-facing encounters, such as 5 to 10 percent, that
was tied to the same list of patient-facing encounter codes discussed
in this section of the final rule with comment period. We did not
pursue this approach because a percentage would not apply consistently,
which could miscategorize MIPS eligible clinicians who would otherwise
be considered patient-facing. Another option we considered was the
identification of non-patient facing MIPS eligible clinicians only by
specialty, which might be a simpler approach. However, we did not
consider this approach sufficient for identifying all the possible non-
patient facing MIPS eligible clinicians, as some non-patient facing
MIPS eligible clinicians practice in multi-specialty practices
alongside patient-facing MIPS eligible clinicians with different
specialties. We would likely have had to develop a separate process to
identify non-patient facing MIPS eligible clinicians in other
specialties, whereas maintaining a single definition that is aligned
across performance categories is simpler. Thus, we did not modify our
approach along these lines.
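The finalized two-level determination can be sketched as follows. This is an illustration under stated assumptions (hypothetical function names; the encounter counts are taken as already computed from the claims analysis CMS conducts over the determination period), not the agency's implementation:

```python
def individual_is_npf(patient_facing_count, threshold=100):
    """Individual-level test: 100 or fewer billed patient-facing
    encounters (including Medicare telehealth services) during the
    non-patient facing determination period."""
    return patient_facing_count <= threshold


def group_is_npf(npi_encounter_counts, share=0.75):
    """Group-level test: more than 75 percent of the NPIs billing
    under the group's TIN must individually qualify as non-patient
    facing during the determination period."""
    qualifying = sum(1 for c in npi_encounter_counts if individual_is_npf(c))
    return qualifying > share * len(npi_encounter_counts)
```

Note that the group test requires strictly more than 75 percent: in a four-NPI group where exactly three NPIs qualify individually (75 percent), the group would not be considered non-patient facing under this reading.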
Comment: In regard to the illustrative list of specific types of
clinicians within the non-patient facing spectrum outlined in the
proposed rule, one commenter requested that CMS remove the reference to
anesthesiologist supervision and ensure that the Quality Payment
Program would not impose any unnecessary supervision. The commenter
noted that physician supervision of nurse anesthetists did not improve
care outcomes and was therefore unnecessary. Another commenter stated
that most anesthesiologists should be designated as non-patient facing
and recommended that CMS reconsider the non-patient facing
determination criteria while another commenter requested that CMS
ensure the equal treatment of certified registered nurse anesthetists
and anesthesiologists when determining who qualifies as a non-patient
facing MIPS eligible clinician. One commenter suggested that CMS
publish the list of patient-facing services as quickly as possible in
order for anesthesiologists to determine if they are considered non-
patient facing MIPS eligible clinicians. The commenter requested that
CMS provide details on how it estimated that a majority of
anesthesiologists would qualify as non-patient facing.
Response: We appreciate the suggestions from commenters regarding
the types of MIPS eligible clinicians to be considered non-patient
facing. We want to clarify that our proposed definition of a non-
patient facing MIPS eligible clinician did not include the
identification of any specific type of physician or clinician
specialty, and note that the statutory definition of an
anesthesiologist does not specify supervision as a requirement.
However, our proposed definition of a non-patient facing
MIPS eligible clinician is based on a methodology that would allow us
to more accurately identify MIPS eligible clinicians who are non-
patient facing by applying a threshold to recognize that a MIPS
eligible clinician who furnishes almost exclusively non-patient facing
services should be treated as a non-patient facing MIPS eligible
clinician despite furnishing a small number of patient-facing services.
Our methodology used to identify non-patient facing MIPS eligible
clinicians included a quantitative, comparative analysis of claims and
HCPCS code data. Contrary to the commenter's belief, we believe that
our proposed definition of a non-patient facing clinician would not
capture the majority of MIPS eligible clinicians or groups within
specialties such as anesthesiology, pathology, radiology, and nuclear
medicine who may provide a small portion of services that would be
considered patient-facing, but would otherwise be considered non-
patient facing MIPS eligible clinicians. As a result of this dynamic,
we are finalizing a modification to our proposed definition of a non-
patient facing MIPS eligible clinician. As previously noted, we will
identify MIPS eligible clinicians who are considered non-patient facing
in advance of the performance period.
Comment: One commenter requested that MIPS eligible clinicians
within the interventional pain management specialty be exempt from
negative, but not positive, MIPS payment adjustments. The commenter
noted that MIPS will destroy independent practices and increase the
costs of Medicare, making Medicare insolvent even sooner than expected.
Response: We thank the commenter for the suggestion. We note that
the statute does not grant the Secretary discretion to exclude non-
patient facing MIPS eligible clinicians from the requirement to
participate in MIPS. However, non-patient facing MIPS eligible
clinicians will benefit from other policies that we are finalizing
throughout this final rule with comment period such as reduced
performance requirements and lower performance threshold. Accordingly,
we describe alternative requirements for non-patient facing MIPS
eligible clinicians across this final rule with comment period (see
sections II.E.5.b., II.E.5.e., and II.E.5.f. of this final rule with
comment period for more details). We disagree with the comment
regarding MIPS negatively impacting independent practices. We
[[Page 77045]]
believe that independent practices will benefit from other policies
that we are finalizing throughout this final rule with comment period
such as reduced performance requirements and lower performance
threshold.
Comment: One commenter requested that CMS abandon the term ``non-
patient facing'' in reference to MIPS eligible clinicians or physician
specialties. The commenter indicated that the patient-facing/non-
patient facing terminology is appropriate for describing the Current
Procedural Terminology (CPT) codes, but not appropriate for describing
a clinician relative to quality improvement. Another commenter
recommended that CMS consider an alternative term to ``non-patient
facing'' as it applies to anesthesiologists. One commenter expressed
concern that the term non-patient facing diminishes the importance of
specialists.
Response: We appreciate the commenters expressing their concerns
regarding the use of the term ``non-patient facing.'' As a result of
these concerns, we are interested in obtaining further
input from stakeholders regarding potential terms that could be used to
describe ``non-patient facing'' under MIPS. Therefore, we are seeking
additional comment on modifying the terminology used to reference
``non-patient facing'' MIPS eligible clinicians for future
consideration. What alternative terms could be used to describe ``non-
patient facing''?
Comment: One commenter indicated that the proposed definition of
non-patient facing clinicians is overly stringent and does not
recognize a number of ``hybrid'' physicians such as nuclear
cardiologists, who split time between patient-facing and non-patient
facing activity. The commenter requested an alternative pathway for
``hybrid'' physicians in order for nuclear cardiologists and others to
successfully participate in MIPS, which is important for medical
specialists with no alternative payment models. As an interim solution,
the commenter requested that the reporting period be shortened and
that MIPS eligible clinicians have the flexibility to select the
reporting period within the applicable calendar year.
Response: We thank the commenter for expressing concerns and
recognize that MIPS eligible clinicians in certain specialties may not
have a majority of their services categorized as non-patient facing. We
want to ensure that MIPS eligible clinicians, including non-patient
facing MIPS eligible clinicians, are able to participate in MIPS
successfully and thus, in this final rule with comment period, we not
only establish requirements for MIPS eligible clinicians in each
performance category, but we apply, to the extent feasible and
appropriate, alternative measures or activities that fulfill the goals
of each performance category. In sections II.E.5.b., II.E.5.e., and
II.E.5.f. of this final rule with comment period, we describe the
alternative requirements for non-patient facing MIPS eligible
clinicians. Also, as described in section II.E.4. of this final rule
with comment period, we are finalizing a modification to the MIPS
performance period to be a minimum of one continuous 90-day period
within CY 2017.
Comment: Several commenters indicated that the definition of a non-
patient facing MIPS eligible clinician is inadequate since the
definition is dependent on the codes that define patient-facing
encounters, which are not yet available. The commenters requested that
CMS provide the applicable CPT codes as soon as possible in order for
affected MIPS eligible clinicians to have sufficient time to assess the
alignment of the codes. One commenter recommended that only evaluation
and management services (the denominators of the cross-cutting measures
as specified in Table C: Proposed Individual Quality Cross-Cutting
Measures for the MIPS to Be Available to Meet the Reporting Criteria
Via Claims, Registry, and EHR Beginning in 2017 of the proposed rule
(81 FR 28447 through 28449)) be considered when determining whether a
MIPS eligible clinician provides face-to-face services. The commenter
indicated that the inclusion of other services, particularly 000 global
codes, will inappropriately classify many radiologists as patient-
facing and put small and rural practices at a distinct disadvantage.
Response: We thank the commenters for their support and for expressing
their concerns. While we did not propose specific patient-facing
encounter codes in the proposed rule, we considered a patient-facing
encounter to be an instance in which the MIPS eligible clinician or
group billed for items and services furnished such as general office
visits, outpatient visits, and procedure codes under the PFS. We agree
with the commenters that a non-patient facing MIPS eligible clinician
is identified based on evaluation and management services, which
reflects the list of patient-facing encounter codes. We note that the
denominators, as specified in Table C of the proposed rule, used for
determining the non-patient facing status of MIPS eligible clinicians
are the same as the denominators of the cross-cutting measures. Based
on our experience with PQRS, we believe that the use of patient-facing
encounter codes is the most appropriate approach for determining
whether or not MIPS eligible clinicians are non-patient facing. We
intend to publish a list of patient-facing encounters on the CMS Web
site located at QualityPaymentProgram.cms.gov.
In regard to the comment pertaining to misclassification, we note
that the definition of non-patient facing MIPS eligible clinicians
creates a minimum threshold for the quantity of patient-facing
encounters that MIPS eligible clinicians or groups would need to
furnish to be considered patient-facing, rather than classifying MIPS
eligible clinicians as patient-facing based on a single patient-facing
encounter. This approach allows for the definition of non-patient
facing MIPS eligible clinicians to include both MIPS eligible
clinicians who practice within specialties traditionally considered
non-patient facing as well as MIPS eligible clinicians who provide
occasional patient-facing services that do not represent the bulk of
their practices. We believe our modified policy will allow us to more
accurately identify MIPS eligible clinicians who are non-patient facing
by applying a threshold in recognition of the fact that a MIPS eligible
clinician who furnishes almost exclusively non-patient facing services
should be treated as a non-patient facing MIPS eligible clinician
despite furnishing a small number of patient-facing services.
Comment: One commenter requested clarification on whether or not
the definition of a patient-facing encounter includes procedures such
as peripheral nerve blocks (64400-64530) and epidural injections
(62310-62319).
Response: We intend to publish the list of patient-facing
encounters on the CMS Web site located at
QualityPaymentProgram.cms.gov, which will include procedures such as
peripheral nerve blocks (64400-64530) and epidural injections (62310-
62319).
Comment: One commenter requested that CMS justify how 25 or fewer
patient-facing encounters was determined as the threshold for non-
patient facing MIPS eligible clinicians.
Response: As previously noted, we believed that setting the non-
patient facing MIPS eligible clinician threshold for an individual MIPS
eligible clinician or group at 25 or fewer billed patient-facing
encounters during a performance period was appropriate. We selected
this threshold based on an analysis of
[[Page 77046]]
non-patient facing HCPCS codes billed by MIPS eligible clinicians.
Using these codes and this threshold, we determined that approximately
one quarter of MIPS eligible clinicians would be identified as non-
patient facing before MIPS exclusions, such as the low-volume threshold
and new Medicare-enrolled eligible clinician policies, were applied.
In this analysis, a significant portion of clinicians enrolled in
Medicare with specialties such as anesthesiology, nuclear medicine, and
pathology were identified as non-patient facing. We believe that our
approach allows the definition of non-patient facing MIPS eligible
clinicians to include both MIPS eligible clinicians who practice within
specialties traditionally considered non-patient facing and MIPS
eligible clinicians who provide occasional patient-facing services that
do not represent the bulk of their practices.
However, as discussed above, we are finalizing a modification to
our proposal to define a non-patient facing MIPS eligible clinician as
an individual MIPS eligible clinician that bills 100 or fewer patient-
facing encounters (including Medicare telehealth services defined in
section 1834(m) of the Act) during the non-patient facing determination
period, and a group provided that more than 75 percent of the NPIs
billing under the group's TIN meet the definition of a non-patient
facing individual MIPS eligible clinician during the non-patient facing
determination period. When we applied our prior methodology to make
determinations at the group level, the percentage of MIPS eligible
clinicians classified as non-patient facing was higher because, at the
group level, MIPS eligible clinicians with fewer than 100 encounters
who would otherwise be considered patient-facing (for example,
pediatricians) are included in the group-level calculation for the
non-patient facing determination. Thus, more specialists would be
classified as non-patient facing when we make determinations at the
group level, particularly when the percentage of specialists identified
as non-patient facing at the group level is compared to the overall
percentage of individual MIPS eligible clinicians. We note that the
number of non-patient facing determinations increases because
individual MIPS eligible clinicians in groups with fewer than 100
encounters, who would otherwise be considered patient-facing, would be
classified as non-patient facing.
Comment: Several commenters disagreed with CMS's proposal to apply
the same billing threshold for patient-facing encounters to both
individual MIPS eligible clinicians and groups. One commenter noted
that such a policy would require groups of non-patient facing MIPS
eligible clinicians to report on inapplicable outcomes and
cross-cutting measures if several individuals' rare face-to-face
patient encounters are summed as a group (for example, a group of 10
physicians with 2 to 3 face-to-face patient encounters per year per
MIPS eligible clinician). Another commenter specifically indicated that
if the proposed non-patient facing threshold is applied at a group
level, specialties such as diagnostic radiology, pathology, nuclear
medicine, and anesthesiology would be considered patient-facing even
though practices in these specialties could be considered non-patient
facing if evaluated individually.
A few commenters indicated that when the proposed threshold is
applied to groups without scaling the threshold by the number of
clinicians in a group, a single individual clinician could push the
entire group into the patient-facing category, even if the other
individual clinicians in the group would, otherwise, be considered non-
patient facing. One commenter indicated that the proposed definition of
a non-patient facing MIPS eligible clinician would impact small and
rural practices whose general radiologists perform more interventional
procedures even though such patient-facing encounters represent only a
very small fraction of the group's total Medicare services.
Several commenters provided alternative options for determining how
the definition of non-patient facing MIPS eligible clinicians could be
applied to groups. One commenter suggested scaling the patient-facing
encounter threshold by the number of clinicians in a group practice
while another commenter suggested doing so by patient-facing encounter
codes. A few other commenters recommended one or more of the following
alternatives: (1) Apply a patient-facing encounter threshold that is
proportional to the group size, and, for non-patient facing MIPS
eligible clinicians who meet the definition, identify such MIPS
eligible clinicians at the beginning of the performance year; (2)
classify groups based on whether the majority of individual MIPS
eligible clinicians meet the threshold; (3) compare a group's average
number of patient-facing encounters to the threshold, where a group's
average would be defined by the total number of patient-facing
encounters billed by the group divided by the number of MIPS eligible
clinicians in the group and as a result, would not be skewed by a few
MIPS eligible clinicians; or (4) redefine a non-patient facing MIPS
eligible clinician by using the threshold of 50 or fewer patient-facing
encounters per individual such that, if 51 percent or more members of
the group individually fall below the threshold, then the entire group
is considered non-patient facing.
Response: We thank the commenters for expressing their concerns
regarding the proposed definition of a non-patient facing MIPS eligible
clinician. Based on the comments received, we recognize that applying
the same threshold at the individual and group levels would
inadvertently identify as patient-facing certain specialty or
multi-specialty groups that would traditionally be considered
non-patient facing or that provide occasional patient-facing services
not representing the bulk of their practice. Thus, we are modifying our
proposed definition of a non-patient facing MIPS eligible clinician to
establish two separate thresholds that apply at the individual and
group level.
Specifically, we are modifying our proposal to define a non-patient
facing MIPS eligible clinician for MIPS as an individual MIPS eligible
clinician that bills 100 or fewer patient-facing encounters (including
Medicare telehealth services defined in section 1834(m) of the Act)
during the non-patient facing determination period, and a group
provided that more than 75 percent of the NPIs billing under the
group's TIN meet the definition of a non-patient facing individual MIPS
eligible clinician during the non-patient facing determination period.
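The two finalized thresholds described above can be sketched in the following illustrative example (the encounter counts and NPI identifiers are hypothetical and not drawn from this rule; the rule itself governs):

```python
# Illustrative sketch of the finalized non-patient facing determination.
# An individual MIPS eligible clinician is non-patient facing with 100 or
# fewer patient-facing encounters (including Medicare telehealth services
# defined in section 1834(m) of the Act) during the determination period;
# a group is non-patient facing when more than 75 percent of the NPIs
# billing under the group's TIN meet the individual definition.

INDIVIDUAL_ENCOUNTER_THRESHOLD = 100
GROUP_NPI_PERCENT_THRESHOLD = 0.75

def individual_is_non_patient_facing(encounters: int) -> bool:
    """Patient-facing encounters billed during the determination period."""
    return encounters <= INDIVIDUAL_ENCOUNTER_THRESHOLD

def group_is_non_patient_facing(npi_encounters: dict) -> bool:
    """npi_encounters maps each NPI billing under the group's TIN to its
    patient-facing encounter count (hypothetical data)."""
    counts = list(npi_encounters.values())
    non_pf = sum(1 for n in counts if individual_is_non_patient_facing(n))
    return non_pf / len(counts) > GROUP_NPI_PERCENT_THRESHOLD

# Example: 4 of 5 NPIs (80 percent) meet the individual definition,
# so the group would be considered non-patient facing.
group = {"NPI1": 10, "NPI2": 0, "NPI3": 95, "NPI4": 40, "NPI5": 300}
print(group_is_non_patient_facing(group))  # True
```

Note that the group test is strictly greater than 75 percent: in a group of four NPIs, exactly three qualifying NPIs (75 percent) would not satisfy the definition.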
In regard to the threshold applying at the group level, we
recognize that groups vary in size and composition and thus, we believe
that a percentage-based approach applies such a threshold equally
across all types of groups. Also, we believe that a percentage-based
threshold for groups is a more appropriate and accurate approach for
distinguishing between groups composed of certain specialty or multi-
specialty practices that should be considered non-patient facing. We
are establishing a percentage-based threshold for groups of more than
75 percent in order to clearly identify whether the majority of
services furnished by a group are non-patient facing. We are
specifying that more than 75 percent of the NPIs billing under the
group's TIN would need to meet the
[[Page 77047]]
definition of a non-patient facing individual MIPS eligible clinician
in order for the group to be considered non-patient facing because such
a threshold is applicable to any group size and composition and clearly
delineates which groups furnish primarily non-patient facing services
while remaining consistent with the individual-level threshold. For
purposes of defining a non-patient facing MIPS eligible clinician as it
relates to groups, we believe that more than 75 percent is an adequate
percentage threshold. Based on the comments received regarding the
establishment of a separate non-patient facing threshold for groups, we
are seeking additional comment on our modified policy for future
consideration, which determines that a group would be considered non-
patient facing if more than 75 percent of the NPIs billing under the
group's TIN meet the definition of a non-patient facing individual MIPS
eligible clinician during the non-patient facing determination period.
Comment: One commenter indicated that clarification is needed on
how the requirements for each performance category would apply to
clinicians who do not have face-to-face encounters with patients.
Response: We refer readers to sections II.E.5.b., II.E.5.e., and
II.E.5.f. of this final rule with comment period, which describe the
requirements for each performance category pertaining to non-patient
facing MIPS eligible clinicians.
Comment: One commenter inquired about whether or not CMS would be
able to distinguish claims for patient-facing encounters from claims
for non-patient facing encounters to ensure that Part B claims for non-
patient facing encounters are not subject to the MIPS payment
adjustment.
Response: The statute makes it clear that the MIPS payment
adjustment applies to the amount otherwise paid under Medicare Part B
with respect to items and services furnished by a MIPS eligible
clinician during a year. We note that there is no carve-out for amounts
paid for claims for non-patient facing services, given that the statute
does not grant the Secretary discretion to establish such a carve-out
through rulemaking.
Comment: One commenter requested that CMS include safeguards that
prevent unintended consequences of scoring newly introduced quality
measures. Specifically, the commenter indicated that the three proposed
population-based measures have rarely, if ever, been reported by
physician anesthesiologists. The three measures--Acute Conditions
Composite (Bacterial Pneumonia, Urinary Tract Infection and
Dehydration), Chronic Conditions Composite (Diabetes, Chronic
Obstructive Pulmonary Disease or Asthma, Heart Failure), and All-cause
Hospital Readmission Measure--are measures over which the physician
anesthesiologist would have little control, especially since these
measures are calculated by CMS using administrative claims data. The
commenter indicated that the use of these measures would place
anesthesiology at a disadvantage to other MIPS eligible clinicians. The
commenter expressed concern that attribution of these measures to
individual physician anesthesiologists may prove to be equally or less
transparent than current measures under VM.
Response: We appreciate the commenter's concerns and note that, as
discussed in section II.E.5.b.(4) of this final rule with comment
period, we are establishing alternative requirements under the quality
performance category for non-patient facing MIPS eligible clinicians.
As discussed in section II.E.6.b.(2) of this final rule with comment
period, we may re-weight performance categories if there are not
sufficient measures applicable and available for each MIPS eligible
clinician in order to ensure that all MIPS eligible clinicians,
including those who are non-patient facing, are scored appropriately.
Lastly, as discussed in section II.E.5.b.(6) of this final rule with
comment period, we note that 2 of the 3 proposed population measures
are not being finalized. In section II.E.8.e. of this final rule with
comment period, we describe a validation process for claims and
registry submissions to determine whether MIPS eligible clinicians have
submitted all applicable measures when they submit fewer than six
measures.
Comment: One commenter requested clarification on how MIPS
incentives or penalties would be applied when facilities (for example,
hospitals) bill and collect the Medicare Part B payments through
reassignment from their hospital-based MIPS eligible clinicians. The
commenter indicated that as hospitals continue to employ primary care
clinicians and specialists and bill payers on their behalf, hospitals
are concerned that their Medicare Part B payments will be subject to
MIPS payment adjustments for poor final scores. The commenter inquired
about whether a hospital-based clinician would be required to
participate in MIPS. The commenter recommended that CMS consider the
consequences of applying a MIPS payment adjustment factor that may
adversely affect financially vulnerable hospitals, such as safety net
hospitals.
Response: We appreciate the commenter expressing concerns. We note
that the requirements described in this final rule with comment period
apply to MIPS eligible clinicians participating in MIPS as individual
MIPS eligible clinicians or groups and do not apply to hospitals
directly. In regard to the commenter's concern about the MIPS payment
adjustment affecting financially vulnerable hospitals and safety net
hospitals, section 1848(q)(6)(E) of the Act provides that the MIPS
payment adjustment is applied to the amount otherwise paid under Part B
for the items and services furnished by a MIPS eligible clinician
during a year (beginning with 2019). Thus, the MIPS payment adjustment
would apply to payments made for items and services furnished by MIPS
eligible clinicians for Medicare Part B charges billed such as those
under the PFS, but it would not apply to the facility payment to the
hospital itself under the inpatient prospective payment system (IPPS)
or other facility-based payment methodology. We refer readers to
sections II.E.1.c. and II.E.1.d. of this final rule with comment
period, which address MIPS eligible clinicians who practice in Method I
CAHs, Method II CAHs, RHCs, and FQHCs.
Comment: A commenter suggested that CMS focus on inpatient care,
rather than outpatient care, because savings are more achievable in the
inpatient setting (particularly in the last 6 months of life). The
commenter noted that the MIPS program should track hospitals, rather
than clinicians.
Response: We appreciate the suggestions from the commenter and will
take them into consideration in future rulemaking.
Comment: Several commenters supported the inclusion of telehealth
services as patient-facing encounters. A few commenters described the
potential benefits of telehealth, including: Increasing access to
health care services that otherwise may not be available to many
patients, reducing avoidable hospitalizations for nursing facility
residents who otherwise may not receive early enough treatment, and
providing an option to help address clinician shortages. Another
commenter expressed concern that telehealth would become common despite
not being a viable substitute for face-to-face patient care.
A few commenters discussed the definition of telehealth. One
commenter recommended a revision to the current Medicare telehealth
definition to reflect simple, plain language for MIPS
[[Page 77048]]
reporting and suggested the following, ``Telehealth means a health care
service provided to a patient from a provider at other location.''
Another commenter requested that CMS define and adopt a technology
neutral definition of telehealth that would allow MIPS eligible
clinicians to report the full range of evidence-based telehealth
services they provide, rather than limiting MIPS telehealth reporting
to be ``Medicare eligible telehealth services'' as defined at 42 CFR
410.78. One commenter requested that CMS expand the definition, use,
and reporting of telehealth services, and clearly distinguish between
MIPS eligible clinicians who are and are not patient-facing (for
example, radiology, physician-to-physician consult). Another commenter
suggested that CMS publish, at the beginning of a performance year, a
comprehensive list of each telehealth service cross-mapped to whether
it is determined to be patient-facing or non-patient facing.
Also, a few commenters recommended that telehealth services should
be restricted to true direct patient encounters (which would count
toward a threshold of patient-facing encounters) and exclude the use of
telehealth services by clinicians to consult with one another. One
commenter disagreed with the eligibility criteria for telehealth
services in contributing towards the scoring of the four performance
categories and recommended that CMS treat telehealth services the same
as all other in-person services for purposes of calculating MIPS
program requirements.
Response: We appreciate the support from commenters regarding our
proposal to include telehealth services in the definition of patient-
facing encounters. We note that telehealth services means the Medicare
telehealth services defined in section 1834(m) of the Act. Under the
PFS and for purposes of this final rule with comment period, Medicare
telehealth services that are evaluation and management services (the
denominators for the cross-cutting measures) are considered patient-
facing encounters; the list of these encounters will be made available
at QualityPaymentProgram.cms.gov. The list of all Medicare telehealth
services is located on the CMS Web site at https://www.cms.gov/Medicare/Medicare-General-Information/Telehealth/Telehealth-Codes.html.
For eligible telehealth services, the use of telecommunications
technology (real-time audio and video communication) substitutes for an
in-person encounter. Services furnished with the use of
telecommunications technology that do not use a real-time interactive
communication between a patient and clinician are not considered
telehealth services. Such services encompass circumstances in which a
clinician would be able to assess an aspect of a patient's condition
without the presence of the patient or without the interposition of
another clinician. In regard to the recommendation from commenters
requesting CMS to modify the definition of telehealth, we note that
section 1834(m) of the Act defines Medicare telehealth services and we
believe this is the appropriate definition for purposes of delineating
the scope of patient-facing encounters.
Comment: One commenter requested that the registration process for
non-patient facing MIPS eligible clinicians be very clear, and noted
that it is difficult to register in more than one place with multiple
logins and passwords. The commenter requested that CMS make sure that
the personnel handling the Quality Payment Program Service Center have
knowledge of areas such as pathology and radiology. The commenter also
recommended that CMS reach out to the specialty clinician community in
order for specialists to know that they need to register.
Response: We did not propose a registration process for non-patient
facing MIPS eligible clinicians. All MIPS eligible clinicians who meet
the definition of a non-patient facing MIPS eligible clinician will be
considered non-patient facing for the duration of a performance period.
In order for non-patient facing MIPS eligible clinicians to know in
advance of a performance period whether or not they qualify as a non-
patient facing MIPS eligible clinician, we will identify non-patient
facing individual MIPS eligible clinicians and groups based on the 24-
month non-patient facing determination period. The non-patient facing
determination period has an initial 12-month segment that would span
from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and include a 60-day claims run out, which will allow us to inform
MIPS eligible clinicians and groups of their non-patient facing status
during the month (December) prior to the start of the performance
period.
For purposes of the 2019 MIPS payment adjustment, we will initially
identify individual MIPS eligible clinicians and groups who are
considered non-patient facing MIPS eligible clinicians based on 12
months of data starting from September 1, 2015 to August 31, 2016. In
order to account for the identification of additional individual MIPS
eligible clinicians and groups that may qualify as non-patient facing
during the 2017 performance period, we will conduct another eligibility
determination analysis based on 12 months of data starting from
September 1, 2016 to August 31, 2017. In regard to the suggestion
regarding the Quality Payment Program Service Center, we strive to
ensure that any MIPS eligible clinician or group that seeks
assistance through the Quality Payment Program Service Center will be
provided with adequate and consistent information pertaining to the
various components of MIPS.
After consideration of the public comments we received, we are
finalizing a modification to our proposal to define a non-patient
facing MIPS eligible clinician for MIPS at Sec. 414.1305 as an
individual MIPS eligible clinician that bills 100 or fewer patient-
facing encounters (including Medicare telehealth services defined in
section 1834(m) of the Act) during the non-patient facing determination
period, and a group provided that more than 75 percent of the NPIs
billing under the group's TIN meet the definition of a non-patient
facing individual MIPS eligible clinician during the non-patient facing
determination period. As noted above, we believe that it would be
beneficial for individual MIPS eligible clinicians and groups to know
in advance of a performance period whether or not they qualify as a
non-patient facing MIPS eligible clinician.
We establish the non-patient facing determination period for
purposes of identifying non-patient facing MIPS eligible clinicians in
advance of the performance period using historical claims data. This
eligibility determination process will allow us to identify non-patient
facing MIPS eligible clinicians prior to or shortly after the start of
the performance period. In order to conduct an analysis of the data
prior to the performance period, we are establishing an initial non-
patient facing determination period consisting of 12 months. The
initial 12-month segment of the non-patient facing determination period
would span from the last 4 months of a calendar year 2 years prior to
the performance period followed by the first 8 months of the next
calendar year and include a 60-day claims run out, which will allow us
to inform MIPS eligible clinicians and groups of their non-patient
facing status during the month (December) prior to the start of the
performance period. The second 12-month segment of the non-patient
facing determination period would span from the last 4 months of a
calendar year 1 year prior to the performance period followed by the
first
[[Page 77049]]
8 months of the performance period in the next calendar year and
include a 60-day claims run out, which will allow us to inform
additional eligible clinicians and groups of their non-patient facing
status during the performance period.
Thus, for purposes of the 2019 MIPS payment adjustment, we will
initially identify individual MIPS eligible clinicians and groups who
are considered non-patient facing MIPS eligible clinicians based on 12
months of data starting from September 1, 2015 to August 31, 2016. In
order to account for the identification of additional individual MIPS
eligible clinicians and groups that may qualify as non-patient facing
during the 2017 performance period, we will conduct another eligibility
determination analysis based on 12 months of data starting from
September 1, 2016 to August 31, 2017.
Similarly, for future years, we will conduct an initial eligibility
determination analysis based on 12 months of data (consisting of the
last 4 months of the calendar year 2 years prior to the performance
period and the first 8 months of the calendar year prior to the
performance period) to determine the non-patient facing status of
individual MIPS eligible clinicians and groups, and conduct another
eligibility determination analysis based on 12 months of data
(consisting of the last 4 months of the calendar year prior to the
performance period and the first 8 months of the performance period) to
determine the non-patient facing status of additional individual MIPS
eligible clinicians and groups. We will not change the non-patient
facing status of any individual MIPS eligible clinician or group
identified as non-patient facing during the first eligibility
determination analysis based on the second eligibility determination
analysis. Thus, an individual MIPS eligible clinician or group that is
identified as non-patient facing during the first eligibility
determination analysis will continue to be considered non-patient
facing for the duration of the performance period regardless of the
results of the second eligibility determination analysis. We will
conduct the second eligibility determination analysis to account for
the identification of additional, previously unidentified individual
MIPS eligible clinicians and groups that are considered non-patient
facing.
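The date arithmetic of the two 12-month segments described above can be sketched as follows (an illustrative reading of the rule's description; the claims run out is noted but not modeled, and the function name is our own):

```python
from datetime import date

def determination_segments(performance_year: int):
    """Return the two 12-month segments of the non-patient facing
    determination period for a given performance year, per the rule's
    description: each segment spans September 1 through August 31 and
    is followed by a 60-day claims run out (not modeled here).
    Illustrative sketch only."""
    # First segment: last 4 months of the calendar year 2 years prior
    # to the performance period plus the first 8 months of the next year.
    first = (date(performance_year - 2, 9, 1), date(performance_year - 1, 8, 31))
    # Second segment: last 4 months of the calendar year 1 year prior
    # plus the first 8 months of the performance period itself.
    second = (date(performance_year - 1, 9, 1), date(performance_year, 8, 31))
    return first, second

# For the 2017 performance period (2019 MIPS payment adjustment), this
# reproduces the windows stated in the rule:
first, second = determination_segments(2017)
print(first)   # September 1, 2015 to August 31, 2016
print(second)  # September 1, 2016 to August 31, 2017
```

Consistent with the rule, a status of non-patient facing assigned in the first segment would not be revisited; the second segment only adds previously unidentified clinicians and groups.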
In addition, we consider patient-facing encounters to be evaluation
and management services (the denominators for the cross-cutting
measures). Lastly, as noted above, we are finalizing our
proposal to include Medicare telehealth services (as defined in section
1834(m) of the Act) in the definition of patient-facing encounters. We
intend to publish a list of patient-facing encounters on the CMS Web
site located at QualityPaymentProgram.cms.gov.
c. MIPS Eligible Clinicians Who Practice in Critical Access Hospitals
Billing Under Method II (Method II CAHs)
Section 1848(q)(6)(E) of the Act provides that the MIPS payment
adjustment is applied to the amount otherwise paid under Part B for the
items and services furnished by a MIPS eligible clinician during a year
(beginning with 2019). In the case of MIPS eligible clinicians who
practice in CAHs that bill under Method I (``Method I CAHs''), the MIPS
payment adjustment would apply to payments made for items and services
billed by MIPS eligible clinicians under the PFS, but it would not
apply to the facility payment to the CAH itself. In the case of MIPS
eligible clinicians who practice in Method II CAHs and have not
assigned their billing rights to the CAH, the MIPS payment adjustment
would apply in the same manner as for MIPS eligible clinicians who bill
for items and services in Method I CAHs.
Under section 1834(g)(2) of the Act, a Method II CAH bills and is
paid for facility services at 101 percent of its reasonable costs and
for professional services at 115 percent of such amounts as would
otherwise be paid under Part B if such services were not included in
outpatient CAH services. In the case of MIPS eligible clinicians who
practice in Method II CAHs and have assigned their billing rights to
the CAHs, those professional services would constitute ``covered
professional services'' under section 1848(k)(3)(A) of the Act because
they are furnished by an eligible clinician and payment is ``based on''
the PFS. Moreover, this is consistent with the precedent CMS has
established by applying the PQRS and meaningful use payment adjustments
to Method II CAH payments. Therefore, we proposed that the MIPS payment
adjustment does apply to Method II CAH payments under section
1834(g)(2)(B) of the Act when MIPS eligible clinicians who practice in
Method II CAHs have assigned their billing rights to the CAH. We
requested comments on this proposal.
The following is a summary of the comments we received regarding
our proposal that the MIPS payment adjustment does apply to Method II
CAH payments under section 1834(g)(2)(B) of the Act when MIPS eligible
clinicians who practice in Method II CAHs have assigned their billing
rights to the CAH.
Comment: One commenter requested clarification regarding whether or
not clinicians who are part of a CAH would be considered a group and
required to participate in MIPS.
Response: We note that clinicians meeting the definition of a MIPS
eligible clinician, unless eligible for an exclusion, are generally
required to participate in MIPS. For MIPS eligible clinicians who
practice in Method I CAHs, the MIPS payment adjustment would apply to
payments made for items and services that are Medicare Part B charges
billed by MIPS eligible clinicians, but it would not apply to the
facility payment to the CAH itself. For MIPS eligible clinicians who
practice in Method II CAHs and have not assigned their billing rights
to the CAH, the MIPS payment adjustment would apply in the same manner
as for MIPS eligible clinicians who bill for items and services in
Method I CAHs. Moreover, in this final rule with comment period, we are
finalizing our proposal that the MIPS payment adjustment does apply to
Method II CAH payments under section 1834(g)(2)(B) of the Act when MIPS
eligible clinicians who practice in Method II CAHs have assigned their
billing rights to the CAH. We note that if a CAH is reporting as a
group, then MIPS eligible clinicians part of a CAH would be considered
a group as defined at Sec. 414.1305.
Comment: Several commenters stated that CMS must address the
problems with Method II Critical Access Hospital reporting prior to
Quality Payment Program implementation, particularly relating to the
attribution methodology and data capture issues. For example,
commenters suggested that CMS examine whether there are mechanisms for
better capturing information on MIPS eligible clinicians from the CMS-
1450 form. Another commenter expressed concern that Method II CAH
participation in PQRS did not work as planned and that the same issues
may affect Method II CAH participation in the Quality Payment Program;
for example, attribution issues may arise when any portion of the items
and services furnished by eligible clinicians is excluded from
Medicare's claims database. The commenter believed that cost and quality
measures are skewed because most patients attributed to Method II CAH
facilities are institutionalized, causing them to appear to have much
higher costs and lower quality than the average, and because not all
CAH services are reported on CMS-1500 claim forms. Specifically,
commenters indicated that Method II CAHs see only a small portion of
their services reimbursed
[[Page 77050]]
under Medicare Part B, including hospital inpatient, swing bed, nursing
home, psychiatric and rehabilitation inpatient, and hospital outpatient
services rendered in non-CAH settings. Services rendered for
outpatients in the CAH setting (for example, provider-based clinic,
observation, emergency room, surgery, etc.) are reimbursed through Part
A and are exempt from the Quality Payment Program. The commenters noted
that this results in beneficiaries who are less acute and low cost to
the Medicare program (those seen in clinic settings and those who have
avoided inpatient and post-acute care settings) being excluded from the
Quality Payment Program attribution, with only potentially high-cost
beneficiaries being counted. Therefore, while a CAH-based eligible
clinician may have a substantial portion of his or her patient
population in a low-cost category, the use of the PQRS attribution
methodology for MIPS could still easily result in the MIPS eligible
clinician being reported as high-cost if only high-cost patients are
included in the Quality Payment Program attribution. The commenters
recommended that all Method II CAH ambulatory services be included in
the attribution methodology of the Quality Payment Program.
For Method II claims, this would involve scrubbing outpatient
claims for services reported with professional revenue codes (96X, 97X
and 98X) that are matched up with the applicable CPT codes. Commenters
recommended an alternative, in which the Method II CAHs could be
benchmarked only against themselves. Commenters indicated that the
penalties would be relatively small, given that Method II CAHs bill
primarily under Part A, but the publishing of these negative scores on
Physician Compare will cause patients to seek care elsewhere, further
destabilizing the rural delivery system.
Response: We appreciate the commenters expressing their concerns
and note that MIPS eligible clinicians who practice in Method II CAHs
may be eligible for the low-volume threshold exclusion, under which
eligible clinicians who do not exceed $30,000 in billed Medicare Part B
allowed charges or 100 Part B-enrolled Medicare beneficiaries would be
excluded from MIPS. We believe this exclusion will benefit eligible
clinicians who practice in Method II CAHs. We refer readers to section
II.E.10. of this final rule with comment period for final policies
regarding public reporting on Physician Compare.
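The low-volume threshold exclusion described in this response can be sketched as a simple check. This is an illustrative sketch only, not CMS's determination methodology; the function name and inputs are hypothetical, while the $30,000 and 100-beneficiary figures come from the exclusion stated above:

```python
def low_volume_excluded(part_b_allowed_charges: float,
                        part_b_beneficiaries: int,
                        charge_threshold: float = 30_000.00,
                        beneficiary_threshold: int = 100) -> bool:
    """Return True if a clinician is excluded from MIPS under the
    low-volume threshold described above: the clinician does not exceed
    $30,000 in billed Medicare Part B allowed charges OR does not
    exceed 100 Part B-enrolled Medicare beneficiaries."""
    return (part_b_allowed_charges <= charge_threshold
            or part_b_beneficiaries <= beneficiary_threshold)

# A Method II CAH clinician with $12,500 in Part B allowed charges
# and 40 Part B beneficiaries would fall under the exclusion:
print(low_volume_excluded(12_500.00, 40))  # True
```

Note that meeting either prong alone triggers the exclusion; a clinician with high charges but few Part B beneficiaries would still be excluded under this reading.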
Comment: One commenter suggested that CMS delay the start of the
MIPS program for MIPS eligible clinicians who practice in Method II
CAHs and have assigned their billing rights to the CAH.
Response: We appreciate the suggestion from the commenter. However,
we do not deem it necessary or justifiable to delay the participation
of MIPS eligible clinicians who provide services in Method II CAHs and
have assigned their billing rights to the CAH given that Method II CAHs
were required to participate in PQRS and the Medicare EHR Incentive
Program.
Comment: One commenter indicated that many clinicians who practice
in Method II CAHs would provide their clinical care in RHCs/FQHCs, and
as such, their only qualifying Part B charges would be documented in
the CAH's inpatient CEHRT. The commenter noted that while PQRS was
mandated for these clinicians, facilities face difficulty creating
quality PQRS reports based on extremely limited encounters. The
commenter also indicated that it is overly burdensome to require these
low-volume ``inpatient only'' CAH providers to participate in the MIPS
program until inpatient CEHRT software is required through the
certification process to produce NQF measure reports (on a clinician by
clinician basis) relevant to any and all CMS quality programs. The
commenter recommended that all clinicians who practice in Method II
CAHs be exempt from reporting under MIPS, similar to the provisions
established under the EHR Incentive Program that exempt hospital-based
EPs from the application of the meaningful use payment adjustment.
Response: We appreciate the concerns expressed by the commenter
regarding MIPS eligible clinicians who practice in Method II CAHs and
note that clinicians meeting the definition of a MIPS eligible
clinician, unless eligible for an exclusion, are generally required to
participate in MIPS (section II.E.3. of this final rule with comment
period describes the provisions pertaining to the exclusions from MIPS
participation). For MIPS eligible clinicians who practice in Method II
CAHs and have not assigned their billing rights to the CAH, the MIPS
payment adjustment would apply to payments made for items and services
billed by MIPS eligible clinicians under the PFS, but it would not
apply to the facility payment to the CAH itself. However, for MIPS
eligible clinicians who practice in Method II CAHs and have assigned
their billing rights to the CAH, the MIPS payment adjustment applies to
Method II CAH payments under section 1834(g)(2)(B) of the Act.
In section II.E.5.g.(8)(a)(i) of this final rule with comment
period, we noted that CAHs (and eligible hospitals) are subject to
meaningful use requirements under sections 1814(l) and 1886(b)(3)(B)
and (n) of the Act, respectively, which were not affected by the
enactment of the MACRA. CAHs (and eligible hospitals) are required to
report on objectives and measures of meaningful use under the EHR
Incentive Program, as outlined in the 2015 EHR Incentive Programs final
rule. The objectives and measures of the EHR Incentive Programs for
CAHs (and eligible hospitals) are specific to these facilities, and are
more applicable and better represent the EHR technology available in
these settings. Section 1848(a)(7)(D) of the Act exempts hospital-based
EPs from the application of the payment adjustment under the EHR
Incentive Program and section 1848(a)(7)(B) of the Act provides the
authority to exempt an EP who is not a meaningful EHR user from the
application of the payment adjustment if it is determined that
compliance with the meaningful EHR user requirements would result in a
significant hardship, such as in the case of an EP who practices in a
rural area without sufficient internet access. The MACRA did not
maintain these statutory exceptions for the advancing care information
performance category under MIPS. Thus, the exceptions under sections
1848(a)(7)(B) and (D) of the Act are limited to the meaningful use
payment adjustment under section 1848(a)(7)(A) of the Act and do not
apply in the context of the MIPS program.
Section 1848(q)(5)(F) of the Act provides the authority to assign
different scoring weights (including a weight of zero) for each
performance category if there are not sufficient measures and
activities applicable and available to each type of MIPS eligible
clinician, including hospital-based clinicians. Accordingly, as
described in section II.E.5.g.(8)(a)(i) of this final rule with comment
period, we may assign a weight of zero percent to the advancing
care information performance category for hospital-based MIPS eligible
clinicians. Under MIPS, we define a hospital-based MIPS eligible
clinician as a MIPS eligible clinician who furnishes 75 percent or more
of his or her covered professional services in sites of service
identified by the Place of Service (POS) codes 21, 22, and 23 used in
the HIPAA standard transaction as an inpatient hospital, on-campus
outpatient hospital, or emergency room setting in the year preceding the
performance period. Consistent with the
[[Page 77051]]
EHR Incentive Program, we will determine which MIPS eligible clinicians
qualify as ``hospital-based'' for a MIPS payment year.
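The hospital-based determination described above (75 percent or more of covered professional services furnished in POS 21, 22, or 23 in the preceding year) can be illustrated with a minimal sketch. This is a hypothetical simplification, counting one POS code per service line rather than applying CMS's actual claims-based methodology:

```python
# POS 21 = inpatient hospital, 22 = on-campus outpatient hospital,
# 23 = emergency room, per the definition above.
HOSPITAL_POS_CODES = {"21", "22", "23"}

def is_hospital_based(service_pos_codes: list[str]) -> bool:
    """Return True if 75 percent or more of the listed covered
    professional services (one POS code per service line; a
    simplifying assumption for illustration) were furnished in
    POS 21, 22, or 23."""
    if not service_pos_codes:
        return False
    in_hospital = sum(1 for pos in service_pos_codes
                      if pos in HOSPITAL_POS_CODES)
    return in_hospital / len(service_pos_codes) >= 0.75

# Three of four services in hospital settings meets the 75% threshold:
print(is_hospital_based(["21", "21", "23", "11"]))  # True
```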
Comment: One commenter requested that CMS address data capture
issues for CAHs that may be required to participate in the MIPS and
examine whether there are mechanisms for better capturing information
on eligible clinicians from the CMS-1450 form. Some CAHs have reported
issues with capturing full information about eligible clinicians from
the institutional billing form used by CAHs (UB-04/CMS-1450). Under
existing billing rules, CAHs may bill one CMS-1450 per day, with claims
from multiple providers combined into one submission.
Response: We appreciate the commenter expressing these concerns and
intend to address operational and system-infrastructure issues
experienced under previously established CMS programs and ensure that
MIPS eligible clinicians have an improved experience when participating
in the MIPS program.
After consideration of the public comments we received, we are
finalizing our proposal that the MIPS payment adjustment will apply to
Method II CAH payments under section 1834(g)(2)(B) of the Act when MIPS
eligible clinicians who practice in Method II CAHs have assigned their
billing rights to the CAH.
d. MIPS Eligible Clinicians Who Practice in Rural Health Clinics (RHCs)
and/or Federally Qualified Health Centers (FQHCs)
As noted in section II.E.1.d. of the proposed rule (81 FR 28176),
section 1848(q)(6)(E) of the Act provides that the MIPS payment
adjustment is applied to the amount otherwise paid under Part B with
respect to the items and services furnished by a MIPS eligible
clinician during a year. Some eligible clinicians may not receive MIPS
payment adjustments due to their billing methodologies. If a MIPS
eligible clinician furnishes items and services in an RHC and/or FQHC
and the RHC and/or FQHC bills for those items and services under the
RHC's or FQHC's all-inclusive payment methodology, the MIPS adjustment
would not apply to the facility payment to the RHC or FQHC itself.
However, if a MIPS eligible clinician furnishes other items and
services in an RHC and/or FQHC and bills for those items and services
under the PFS, the MIPS adjustment would apply to payments made for
items and services. We note that an eligible clinician providing
services for an RHC or FQHC as an employee or contractor is paid by the
RHC or FQHC, not under the PFS. When a MIPS eligible clinician furnishes
professional services in an RHC and/or FQHC, the RHC bills for those
services under the RHC's all-inclusive rate methodology and the FQHC
bills for those services under the FQHC prospective payment system
methodology, in which case the MIPS payment adjustment would not apply
to the RHC or FQHC payment. Therefore, we proposed that services rendered
by an eligible clinician that are payable under the RHC or FQHC
methodology would not be subject to the MIPS payment adjustments.
However, these eligible clinicians have the option to voluntarily
report on applicable measures and activities for MIPS, in which case the
data received would not be used to assess their performance for the
purpose of the MIPS payment adjustment. We requested comments on this
proposal.
The following is a summary of the comments we received regarding
our proposal that services rendered by an eligible clinician that are
payable under the RHC or FQHC methodology would not be subject to the
MIPS payment adjustments.
Comment: Several commenters supported CMS' proposal that items and
services furnished by a MIPS eligible clinician that are payable under
the RHC or FQHC methodology would not be subject to the MIPS payment
adjustments.
Response: We appreciate the support from commenters.
Comment: One commenter noted that it is unclear what the
participation requirements are for MIPS eligible clinicians who
practice in FQHCs.
Response: In this final rule with comment period, we note that
items and services furnished by a MIPS eligible clinician that are
payable under the RHC or FQHC methodology would not be subject to the
MIPS payment adjustment. These MIPS eligible clinicians have the
option to voluntarily report on applicable measures and activities for
MIPS. If such MIPS eligible clinicians voluntarily participate in MIPS,
they would follow the requirements established for each performance
category. We note that the data received from such MIPS eligible
clinicians would not be used to assess their performance for the
purpose of the MIPS payment adjustment. However, items and services
that a MIPS eligible clinician bills as Medicare Part B charges would
be subject to the MIPS payment adjustment. Also, we note that such
MIPS eligible clinicians may be excluded from the requirement to
participate in MIPS if their billed Medicare Part B allowed charges or
Part B-enrolled beneficiary counts do not exceed the low-volume
threshold as described in section II.E.3.c. of this final rule
with comment period.
Comment: Several commenters agreed with voluntary reporting of MIPS
data for FQHC and RHC clinicians as described in the proposed rule, and
recommended that quality reporting requirements should be matched with
HRSA measures. Commenters noted that drawing conclusions from the
initial data could be problematic based upon coding and documentation
differences compared to other clinicians reporting MIPS data. One
commenter requested that CMS not request FQHCs and RHCs to voluntarily
submit data. The commenter indicated such organizations have neither
the IT support nor administrative staff to submit extended data.
Response: We thank the commenters for expressing their concerns
regarding the comparability of data submitted by MIPS eligible
clinicians who practice in RHCs and FQHCs. We want to reiterate that
such MIPS eligible clinicians have the option to decide whether or not
they voluntarily participate in MIPS.
Comment: A few commenters requested CMS to ensure that FQHC
clinicians are not subject to MIPS for the limited number of FQHC-
related claims submitted under the PFS. Alternatively, one commenter
requested that fee-for-service claims for non-specialty services furnished
by clinicians practicing in FQHCs or RHCs not be counted when
determining eligibility for the low-volume threshold.
Response: We appreciate the concern expressed by the commenter and
note that section 1848(q)(6)(E) of the Act provides that the MIPS
payment adjustment is applied to the amount otherwise paid under
Medicare Part B with respect to the items and services
furnished by a MIPS eligible clinician during a year. With respect to
the comment regarding the low-volume threshold, we refer readers to
section II.E.3.c. of this final rule with comment period, in which we
establish a low-volume threshold to identify MIPS eligible clinicians
excluded from participating in MIPS. We disagree with the
recommendation that fee-for-service claims for non-specialty items
and services furnished by clinicians practicing in FQHCs or RHCs should
be excluded from the low-volume threshold eligibility determination. We
believe that the low-volume threshold established in this final rule
with comment period retains as MIPS eligible
[[Page 77052]]
clinicians those who treat relatively few beneficiaries but practice in
resource-intensive specialties, as well as those who treat many
beneficiaries with relatively low-priced services. This allows us to
meaningfully measure performance and drive quality improvement across
the broadest range of MIPS eligible clinician types and specialties.
Conversely, the threshold excludes MIPS eligible clinicians who neither
have a substantial number of interactions with Medicare beneficiaries
nor furnish high-cost services. Clinicians practicing in an RHC or FQHC
who do not exceed the low-volume threshold would be excluded from the
MIPS requirements.
Comment: Several commenters indicated that RHCs should be
incentivized to participate and report quality data under the Quality
Payment Program. One commenter indicated that the voluntary
participation option is unlikely to be used without an incentive.
Another commenter recommended that CMS conduct a survey of RHCs before
it makes the effort to set up a voluntary reporting program that no one
is likely to use. The commenter's own survey found that without
incentives or penalties, very few RHCs would voluntarily participate in
MIPS, and found that an incentive payment of $10,000 per clinic per
year would prompt about half of RHCs to report under MIPS. A few
commenters suggested that CMS include RHCs in MIPS, as RHCs are the
only primary care delivery system left in the country with no tie to
value.
Response: We appreciate the suggestions from commenters and will
consider them as we assess the volume of voluntary reporting under
MIPS.
Comment: One commenter expressed concern that under CMS' proposal
to exclude RHCs from MIPS, RHCs' patients will fail to benefit from the
rigorous quality measurement that comparable practices under the MIPS
program will experience. The commenter was concerned about the growing
disparities in quality and life expectancy between rural and urban
patients, and noted that the number of RHCs has grown from 400 in 1990
to more than 4,000 today, with new conversions continuing as more rural
providers realize they can be paid more under this model than under
FFS.
Response: We thank the commenter for expressing concerns and note
that MIPS eligible clinicians who practice in RHCs and furnish items
and services that are payable under the RHC methodology have the option
to voluntarily report on applicable measures and activities for MIPS.
Comment: A few commenters requested that consideration be given to
phase-in requests for FQHC voluntary reporting to allow for the
development of social determinants of health status measure
adjustments.
Response: We appreciate the feedback on the role of socioeconomic
status in quality measurement. We continue to evaluate the potential
impact of social risk factors on measure performance. One of our core
objectives is to improve beneficiary outcomes, and we want to ensure
that complex patients as well as those with social risk factors receive
excellent care.
Comment: A few commenters supported CMS' proposal to be inclusive
of rural practices, but encouraged CMS to have special conditions for
such rural clinicians that have not participated in PQRS, VM, or the
Medicare EHR Incentive Program for EPs in the past and suggested a
phased approach for full participation that protects safety net
clinicians from downside risk.
Response: We appreciate the support from commenters and note that
MIPS eligible clinicians who practice in RHCs and furnish items and
services that are payable under the RHC methodology would not be
subject to the MIPS payment adjustments for such items and services,
but would have the option to voluntarily report on applicable measures
and activities for MIPS. For such MIPS eligible clinicians who
voluntarily participate in MIPS, the data submitted to CMS would not be
used to assess their performance for the purpose of the MIPS payment
adjustment.
Comment: One commenter recommended that CMS create a system
permitting the voluntary reporting of performance information by
excluded clinicians, and that the data reported be used to help define
rural-specific measures and standards for these clinicians and for all
rural clinicians. Under this system, data would be released only on an
aggregate basis, protecting the privacy of individual entities
reporting.
Response: We thank the commenter for the suggestions and will
consider them as we establish policies pertaining to MIPS eligible
clinicians who practice in RHCs and FQHCs in future rulemaking.
Comment: One commenter noted that in certain communities, clinical
services are delivered in RHCs, small independent practices and
community health centers, in which hospital-based services billed under
the PFS may only represent a small portion of total care provided. The
commenter requested that CMS develop a method for rural clinicians such
as those practicing in RHCs and FQHCs to have a meaningful avenue to
participate in the Quality Payment Program. Another commenter indicated
that RHCs, CAHs, and FQHCs were created to assure the availability of
health care services to remote and underserved populations, and that
while a majority of clinicians who practice in RHCs, CAHs, and FQHCs
bill under Medicare Part A, they may have a limited number of encounters
for which services are billed under Medicare Part B. Thus, such
clinicians may
exceed the low-volume threshold and therefore be subject to the MIPS
payment adjustment. The commenter expressed concerns that RHCs, CAHs,
and FQHCs would be negatively impacted by having their resources
stretched even further if required to meet the requirements under MIPS
or be subject to a negative MIPS payment adjustment. The commenter also
noted that many RHCs and FQHCs have not implemented EHR technology due
to the lack of available resources and struggle to recruit qualified
clinicians and staff, and as a result, such clinicians and staff are
disproportionately older than the average health care workforce. If
RHCs and FQHCs are required to participate in MIPS and either meet all
requirements or be subject to a negative MIPS payment adjustment, the
fiscal resources consumed by either a MIPS payment adjustment or by
investment in EHR technology would significantly reduce the
availability of services to remote and underserved
populations. The commenter recommended that CMS consider permanent
exclusions for clinicians practicing in RHCs and FQHCs from the
requirement to participate in the MIPS program. One commenter noted
that CMS should provide exemptions from entire performance categories,
not just individual measures and activities, consider the feasibility
of shorter reporting timeframes, and ensure that there are free or low
cost reporting options within each MIPS performance category.
Response: We appreciate the commenters expressing their concerns
and providing recommendations. We will take into consideration the
suggestions from commenters in future rulemaking. We note that the MIPS
payment adjustment is limited to items and services that MIPS eligible
clinicians bill as Medicare Part B charges, such as those billed
under the PFS. We note that MIPS eligible clinicians practicing in RHCs
and FQHCs will benefit from other policies that we are finalizing
throughout this final rule with
[[Page 77053]]
comment period such as the higher low-volume threshold, lower reporting
requirements, and lower performance threshold.
Comment: One commenter requested clarification on how CMS would
define rural areas and suggested that CMS adopt a consistent definition
for the term ``small practices'' across all CMS programs. The commenter
suggested that a small practice be defined as having 25 or fewer
clinicians. Another commenter recommended that the low-volume threshold
be set at an even higher level for rural and underserved areas to
ensure that MIPS does not endanger the financial stability of rural
safety net practices or reduce access to services for rural Medicare
beneficiaries.
Response: We note that we define rural area clinicians as those
practicing in ZIP codes designated as rural, using the most recent HRSA
Area Health Resource File data set available, as described in section
II.E.5.f.(5) of this final rule with comment period. Also, in section
II.E.5.f.(5)
of this final rule with comment period, we define small practices as
practices consisting of 15 or fewer clinicians. We are finalizing our
proposed definition of small practices because the statute provides
special considerations for small practices consisting of 15 or fewer
clinicians. In regard to the commenter's suggestion pertaining to the
low-volume threshold, we are finalizing a modification to our proposal,
which establishes a higher low-volume threshold as described in section
II.E.3.c. of this final rule with comment period.
Comment: Some commenters recommended that CMS follow the
recommendations of the NQF Report on Performance Measurement for Rural
Low-Volume Providers and establish rural peer groups and rural-specific
standards for assessment of rural provider performance in all domains.
Commenters noted that the NQF developed specific recommendations for
how pay-for-performance mechanisms should be implemented for rural
providers. The NQF Report on Performance Measurement for Rural Low-
Volume Providers sets out both overarching and specific approaches for
how rural provider performance measurement should be handled. The NQF
Report on Performance Measurement for Rural Low-Volume Providers also
makes recommendations about rural performance measures of domains other
than quality, including cost. One commenter noted that as rural-
specific quality measures are developed, such measures should be both
mandatory core measures and elective supplementary measures.
Response: We appreciate the recommendations provided by the
commenters and will take them into consideration for future rulemaking.
Comment: One commenter agreed with the goals of the proposed rule,
but believed that the proposed rule had one thematic deficiency arising
from its quality reporting constructs: they implied a dichotomy of
``primary care'' versus ``specialist,'' with the corollary implication
that all specialists and specialties affect the value of current health
care similarly (and generally adversely), thereby marginalizing
specialties as leaders in care quality and efficiency improvement. The
commenter recommended that CMS create specialty-
improvement. The commenter recommended that CMS create specialty-
specific quality and efficiency targets that incentivize specialists
caring for high risk, high-cost chronically ill patients to provide the
best long-term care and coordinate care with primary care physicians
(including chronic care subspecialists practicing across multiple
health systems rather than as part of a larger provider entity) with
each specialty having specific quality goals and efficiency targets.
Response: We appreciate the feedback from the commenter, but
disagree with commenter's assessment that our policies marginalize
specialists. We will take into consideration the recommendations
provided by the commenter for future rulemaking.
Comment: Due to complexity of the proposed rule and the extremely
short projected turnaround time before the start of the 2017
performance period, a few commenters recommended that Frontier Health
Professional Shortage Area (HPSA) clinicians should be exempt from
mandatory MIPS/APM participation until 2019, when the program has had a
chance to evaluate its successes and failures with respect to larger,
more economically stable participants. The commenters suggested that
Frontier HPSA clinicians should be allowed to voluntarily participate
if they want to, but they should not be penalized due to the low-
income, low-population challenges faced in extremely rural areas until
payment year 2021 or later.
Response: We note that the statute does not grant the Secretary
discretion to establish exclusions other than the three exclusions
described in section II.E.3. of this final rule with comment period.
Thus, Frontier HPSA clinicians who are MIPS eligible clinicians are
required to participate in MIPS. However, we believe that Frontier HPSA
clinicians will benefit from other policies that we are finalizing
throughout this final rule with comment period such as the higher low-
volume threshold, lower reporting requirements, and lower performance
threshold.
After consideration of the public comments we received, we are
finalizing our proposal that services rendered by an eligible clinician
that are payable under the RHC or FQHC methodology will not be subject
to the MIPS payment adjustments. However, these eligible clinicians
have the option to voluntarily report on applicable measures and
activities for MIPS, in which case the data received will not be used
to assess their performance for the purpose of the MIPS payment
adjustment.
e. Group Practice (Group)
Section 1848(q)(1)(D) of the Act requires the Secretary to
establish and apply a process that includes features of the PQRS group
practice reporting option (GPRO) established under section
1848(m)(3)(C) of the Act for MIPS eligible clinicians in a group for
purposes of assessing performance in the quality performance category.
In addition, it gives the Secretary the discretion to do so for the
other three performance categories. Under MIPS, we will assess
performance either for individual MIPS eligible clinicians or for
groups. As discussed in section II.E.2.b. of the proposed rule (81 FR
28177), we proposed to define a group at Sec. 414.1305 as a single
Taxpayer Identification Number (TIN) with two or more MIPS eligible
clinicians, as identified by their individual National Provider
Identifier (NPI), who have reassigned their Medicare billing rights to
the TIN. Also, as outlined in section II.E.2.c. of the proposed rule
(81 FR 28177), we proposed to define an APM Entity group at Sec.
414.1305 as a group identified by a unique APM participant identifier.
However, we
are finalizing a modification to the definition of a group as described
in section II.E.2.b. of this final rule with comment period and
finalizing the definition of an APM Entity group as described in
section II.E.2.c. of this final rule with comment period.
2. MIPS Eligible Clinician Identifier
To support MIPS eligible clinicians reporting to a single
comprehensive and cohesive MIPS program, we need to align the technical
reporting requirements from PQRS, VM, and EHR-MU into one program. This
requires an appropriate MIPS eligible clinician identifier. We
currently use a variety of identifiers to assess an individual eligible
clinician or group under different programs. For example, under the
PQRS for individual reporting, CMS uses a combination of TIN and NPI to
assess eligibility and
[[Page 77054]]
participation, where each unique TIN and NPI combination is treated as
a distinct eligible clinician and is separately assessed for purposes
of the program. Under the PQRS GPRO, eligibility and participation are
assessed at the TIN level. Under the Medicare EHR Incentive Program, we
utilize the NPI to assess eligibility and participation. Under the VM,
performance and payment adjustments are assessed at the TIN level.
Additionally, for APMs such as the Pioneer Accountable Care
Organization (ACO) Model, we also assign a program-specific identifier
(in the case of the Pioneer ACO Model, an ACO ID) to the
organization(s), and associate that identifier with individual eligible
clinicians who are, in turn, identified through a combination of a TIN
and an NPI.
In the MIPS and APMs RFI (80 FR 63484), we sought comments on which
specific identifier(s) should be used to identify a MIPS eligible
clinician for purposes of determining eligibility, participation, and
performance under the MIPS performance categories. In addition, we
requested comments pertaining to what safeguards should be in place to
ensure that MIPS eligible clinicians do not switch identifiers to avoid
being considered ``poor-performing'' and comments on what safeguards
should be in place to address any unintended consequences, if the MIPS
eligible clinician identifier were a unique TIN/NPI combination, to
ensure an appropriate assessment of the MIPS eligible clinician's
performance. In the MIPS and APMs RFI (80 FR 63484), we sought comment
on using a MIPS eligible clinician's TIN, NPI, or TIN/NPI combination
as potential MIPS eligible clinician identifiers, or creating a unique
MIPS eligible clinician identifier. The commenters did not demonstrate
a consensus on a single best identifier.
Commenters favoring the use of the MIPS eligible clinician's TIN
recommended that MIPS eligible clinicians be associated with the
TIN used for receiving payment from CMS on claims. They further
commented that this approach would deter MIPS eligible clinicians from
``gaming'' the system by switching to a higher-performing group. Under
this approach, commenters suggested that MIPS eligible clinicians who
bill under more than one TIN could be assigned the performance and MIPS
payment adjustment of their primary practice, based upon the majority
of the dollar amount of claims or encounters from the prior year.
Other commenters supported using unique TIN and NPI combinations to
identify MIPS eligible clinicians. Commenters suggested many eligible
clinicians are familiar with using TIN and NPI together from PQRS and
other CMS programs. Commenters also noted that this approach allows
performance to be calculated for multiple unique TIN/NPI combinations
for those MIPS eligible clinicians who practice under more than one
TIN. Commenters
who supported the TIN/NPI also believed this approach enables greater
accountability for individual MIPS eligible clinicians beyond what
might be achieved when using TIN as an identifier and would provide a
safeguard from MIPS eligible clinicians changing their identifier to
avoid payment penalties.
Some commenters supported the use of only the NPI as the MIPS
identifier. They believed this approach would best provide for
individual accountability for quality in MIPS while minimizing
potential confusion because providers do not generally change their NPI
over time. Supporters of using the NPI only as the MIPS identifier also
commented that this approach would be simplest for administrative
purposes. These commenters also noted that the continuity inherent in
the NPI would address the safeguard concern of providers attempting to
change their identifier for MIPS performance purposes.
In the MIPS and APMs RFI (80 FR 63484), we also solicited feedback
on the potential for creating a new MIPS identifier for the purposes of
identifying MIPS eligible clinicians within the MIPS program. In
response, many commenters indicated they would not support a new MIPS
identifier. Commenters generally expressed concern that a new
identifier for MIPS would only add to administrative burden, create
confusion for MIPS eligible clinicians and increase reporting errors.
After reviewing the comments, we did not propose to create a new
MIPS eligible clinician identifier. However, we appreciated the various
ways a MIPS eligible clinician may engage with MIPS, either
individually or through a group. Therefore, we proposed to use multiple
identifiers that allow MIPS eligible clinicians to be measured as an
individual or collectively through a group's performance. We also
proposed that the same identifier be used for all four performance
categories; for example, if a group is submitting information
collectively, then it must be measured collectively for all four MIPS
performance categories: Quality, cost, improvement activities, and
advancing care information. As discussed in the final score methodology
section II.E.6. of the proposed rule (81 FR 28247 through 28248), we
proposed to use a single identifier, TIN/NPI, for applying the MIPS
payment adjustment, regardless of how the MIPS eligible clinician is
assessed. Specifically, if the MIPS eligible clinician is identified
for performance only using the TIN, we proposed to use the TIN/NPI when
applying the MIPS payment adjustment. We requested comments on these
proposals.
The following is a summary of the comments we received regarding
our proposals to use multiple identifiers that allow MIPS eligible
clinicians to be measured as an individual or collectively through a
group's performance and use a single identifier, TIN/NPI, for applying
the MIPS payment adjustment.
Comment: Several commenters supported the proposal to have each
unique TIN/NPI combination considered a different MIPS eligible
clinician and to use the TIN to identify group practices. One commenter
noted that using a group's billing TIN to identify a group is
consistent with the current CMS approach under PQRS and VM, and is
preferable to creating a new MIPS-specific identifier for groups.
Response: We appreciate the support from commenters.
Comment: One commenter noted that the proposed MIPS identifiers
(combination of TIN/NPI, etc.) would be sufficient for individual,
group, and APM reporting to MIPS, but requested that CMS establish an
identifier for virtual groups. Another commenter questioned the use of
these identifiers beyond their original purposes.
Response: We appreciate the feedback from the commenters. We did
not propose an identifier for virtual groups, but in future rulemaking,
we will take into consideration the establishment of a virtual group
identifier. As noted in this final rule with comment period, the use of
the identifiers enables us to identify individual MIPS eligible
clinicians at the TIN/NPI level and groups at the TIN level.
Comment: A few commenters opposed the approach of creating a new
MIPS eligible clinician identifier at the initiation of the Quality
Payment Program because it would be premature and cause confusion. One
commenter further noted that there may be times when a clinician is not
MIPS eligible and then becomes MIPS eligible. Also, the commenter
indicated that there is currently not a way to report the identifier on
claims.
Response: We disagree with the commenter and believe that it is
[[Page 77055]]
essential for us to be able to identify individual MIPS eligible
clinicians using a unique identifier because the MIPS payment
adjustment would be applied to the Medicare Part B charges billed by
individual MIPS eligible clinicians at the TIN/NPI level. We note that
we will be able to identify, at the NPI level, individual eligible
clinicians who are excluded from the MIPS requirements and not subject
to the MIPS payment adjustment for exclusions pertaining to new
Medicare-enrolled eligible clinicians and to QPs and Partial QPs not
participating in MIPS. In our analyses of claims data, we will be able to
identify individual MIPS eligible clinicians at the TIN/NPI level given
that billing is associated with a TIN or TIN/NPI.
Comment: One commenter recommended the use of TINs plus
alphanumeric codes as identifiers.
Response: We disagree with the commenter's suggestion to use a TIN
with an alphanumeric code because it would add complexity and not
facilitate the identification of individual eligible clinicians at the
NPI level who are associated with a group at the TIN level. For certain
exclusions (for example, new Medicare-enrolled eligible clinicians, and
QPs and Partial QPs who are not participating in MIPS), eligibility
determinations will be made and applied at the NPI level.
Comment: Several commenters requested that small physician
practices be exempt from MIPS. A few commenters indicated that
penalizing small practices would decrease access to care for patients.
One commenter indicated that small groups and independent physicians
are unfairly penalized and are being forced to integrate into larger
hospital or corporations. Another commenter expressed concern that
additional administrative duties will affect patient care and will not
improve healthcare. One commenter indicated that the proposed rule was
discriminatory toward solo or small group practices. The commenter
noted that the financial burden of MACRA will result in the closure of
many solo and small group practitioners.
Response: We appreciate the concerns expressed by the commenters.
We note that the statute does not grant the Secretary discretion
to establish exclusions other than the exclusions described in section
II.E.3. of this final rule with comment period. However, we believe
that small practices will benefit from policies we are finalizing
throughout this final rule with comment period such as the higher low-
volume threshold, lower performance requirements, and lower performance
threshold.
Comment: A few commenters requested that CMS determine and state
eligibility status for clinicians providing services at independent
diagnostic testing facilities (IDTFs) and to provide clear, detailed
guidance under what circumstances eligibility would occur under MIPS.
One commenter noted that CMS has issued similar guidance under the PQRS
system of ``eligible but not able to participate''; however, the
commenter indicated that the guidance provided in PQRS does not address
all variations of billing and coding practices of IDTFs.
Response: We note that the MIPS payment adjustment applies only to
the amount otherwise paid under Part B with respect to items and
services furnished by a MIPS eligible clinician during a year. As
discussed in section II.E.7. of this final rule with comment period, we
will apply the MIPS adjustment at the TIN/NPI level. In regard to
suppliers of independent diagnostic testing facility services, we note
that such suppliers are not themselves included in the definition of a
MIPS eligible clinician. However, there may be circumstances in which a
MIPS eligible clinician would furnish the professional component of a
Part B covered service that is billed by such a supplier. Those
services could be subject to MIPS adjustment based on the MIPS eligible
clinician's performance during the applicable performance period.
Because, however, those services are billed by suppliers that are not
MIPS eligible clinicians, it is not operationally feasible for us at
this time to associate those billed allowed charges with a MIPS
eligible clinician at an NPI level in order to include them for
purposes of applying any MIPS payment adjustment.
Comment: One commenter expressed concern regarding the definition
of a group (unique TIN) because large health systems and hospitals
operate large medical groups spanning practices and specialties, and
all of them share a TIN and EHRs. The commenter indicated that grouping
all clinicians together takes away the advantages of group
participation. The commenter noted that CMS should generate another way
for group practices to differentiate themselves.
Response: We thank the commenter for expressing their concern. We
disagree with the commenter because we believe that group level
reporting is advantageous for groups in that it encourages
coordination, teamwork, and shared responsibility. However, we
recognize that we are not able to identify groups with eligible
clinicians who are excluded from the MIPS requirements at both the
individual and group levels, such as new Medicare-enrolled clinicians. We
note that we could establish new identifiers to more
accurately identify such eligible clinicians. For future consideration,
we are seeking additional comment on the identifiers. What are the
advantages and disadvantages of identifying new Medicare-enrolled
eligible clinicians and eligible clinicians not included in the
definition of a MIPS eligible clinician until year 3 such as
therapists? What are the possible identifiers that could be established
for identifying such eligible clinicians?
Comment: One commenter requested clarification about how CMS
intends to treat group practices participating in MIPS in regard to
satisfying the ``hospital-based clinician'' definition, and questioned
if it would evaluate the group as a whole, or each individual within
the group. And if the latter, the commenter questioned if CMS would
adopt a process for scoring individuals in a group differently than the
overall group. Another commenter requested that CMS consider how the
definition of a group, and use of a single TIN, could represent
facility-based outpatient therapy clinicians. Currently, many facility-
based outpatient clinicians operate under the facility's TIN.
Response: We note that hospital-based MIPS eligible clinicians are
considered MIPS eligible clinicians and are required to participate in
MIPS. However, section II.E.5.g.(8)(a)(i) of this final rule with
comment period describes our final policies regarding the re-weighting
of the advancing care information performance category within the final
score, in which we would assign a weight of zero when there are not
sufficient measures applicable and available for hospital-based MIPS
eligible clinicians.
In regard to how the definition of a group corresponds to facility-based
outpatient clinicians, we note that the MIPS payment adjustment
applies only to the amount otherwise paid under Part B with respect to
items and services furnished by a MIPS eligible clinician during a
year, in which we will apply the MIPS adjustment at the TIN/NPI level
(see section II.E.7. of this final rule with comment period). For items
and services furnished by such clinicians practicing in a facility that
are billed by the facility, such items and services may be subject to
MIPS adjustment based on the MIPS eligible clinician's performance
during the applicable performance period. For those billed Medicare
Part B allowed charges we are
[[Page 77056]]
able to associate with a MIPS eligible clinician at an NPI level, such
items and services furnished by such clinicians would be included for
purposes of applying any MIPS payment adjustment.
Comment: Several commenters recommended that CMS extend groups to
include multiple TINs and require that those TINs share and have access
to the same EHR. Commenters noted that group reporting would be
complicated by clinicians joining the group, and clinicians assigned to
multiple TINs using different EHR systems. The commenters also
expressed concern about the ability for groups to submit quality data
under the group reporting option using different types of EHRs.
One commenter requested the submission of multiple specialty-specific
data sets and an altered scoring methodology.
Response: We appreciate the commenters expressing their concerns
and providing their suggestions. We are finalizing the definition of a
group as proposed. We disagree with commenters that the definition of a
group should be modified in order to account for operational and
technical data mapping issues. We believe that the finalized definition
of a group provides groups with the opportunity to utilize their
performance data in ways that can improve coordination, teamwork, and
shared responsibility.
We do not believe that the definition of a group would create
complications for eligible clinicians associated with multiple TINs. We
note that individual eligible clinicians would be required to meet the
MIPS requirements for each TIN/NPI association unless they are excluded
from MIPS based on an exclusion established in section II.E.3. of this
final rule with comment period.
Comment: One commenter requested CMS to ensure that each service
provided to a patient is associated with the actual clinician
furnishing that service.
Response: We note that the MIPS payment adjustment for individual
MIPS eligible clinicians is applied to the Medicare Part B payments for
items and services furnished by each MIPS eligible clinician. For
groups reporting at the group level, scoring and the application of the
MIPS payment adjustment is applied at the TIN level for Medicare Part B
payments for items and services furnished by the eligible clinicians of
the group.
Comment: One commenter supported CMS' proposal for optional group
performance tracking and submission, but recommended that CMS provide
additional guidelines for clinicians who practice under multiple
identifiers. The commenter requested additional clarification on how
MIPS payment adjustments would impact clinicians working under multiple
identifiers at multiple organizations.
Response: We appreciate the support from the commenter. As
previously noted, individual eligible clinicians who are part of several
groups, and thus associated with multiple TINs, would be required to
participate in MIPS for each group (TIN) association unless the eligible
clinician (NPI) is excluded from MIPS. Section II.E.3.e. of this final
rule with comment period
describes how the exclusion policies relate to groups with eligible
clinicians excluded from MIPS.
Comment: With many clinicians practicing within multiple TINs, one
commenter suggested that even though it is unclear how multiple-TIN
clinicians who choose individual reporting would be scored, CMS should
use the clinician's highest TIN performance score for each of the four
performance categories. Another commenter requested clarification on
how the Quality Payment Program rule will apply to clinicians who work
under multiple TINs, including the scenario where one TIN is
participating in an ACO and another is not.
Response: We note that groups have the option to report at the
individual or group level. For individual eligible clinicians
associated with multiple TINs, the individual eligible clinician will
either report at the individual level if the group elects
individual-level reporting or be included in the group-level reporting
if the group elects group-level reporting. As previously noted,
individual eligible clinicians who are associated with multiple TINs
would be required to participate in MIPS for each group (TIN)
association unless the eligible clinician (NPI) is excluded from MIPS.
Comment: One commenter reminded CMS that using TINs as identifiers
has caused problems in the past, such as inaccurate TINs. When TINs are
not accurate, performance rates and
program metrics may be incorrect. The commenter recommended that CMS
establish clear and efficient mechanisms for groups to resolve
inconsistencies.
Response: We appreciate the feedback from the commenter and will
take into consideration the commenter's suggestions in future
rulemaking.
Comment: Several commenters supported the proposal to permit
clinicians to report either at the individual or group level. However,
one commenter expressed concern about limitations on the ability of
clinicians, in the context of group-level reporting, to report the most
appropriate and meaningful specialty measures. Another commenter
indicated that it was not clear how group reporting would allow for
specialty-specific reporting, given the lack of a TIN for individual
departments within a larger faculty practice plan or physician group.
The commenter noted that this could cause thousands of providers to
miss out on the best use of MIPS because their facilities chose
reporting measures and activities that would not reflect the care they
individually provide. Therefore, the commenter suggested that CMS
create a reporting option within MIPS that would allow specialty-
specific groups to self-designate as ``group'' under MIPS even if they
were part of the TIN for a larger facility practice plan or physician
group. The commenter noted that this would facilitate the comparison of
physicians providing a similar mix of procedures for comparison for the
purpose of assigning a final score. Another commenter recommended that
CMS consider the common business model where large hospitals and health
systems acquire multiple physician practices.
Response: We appreciate the support from the commenters. We will
consider the recommendations from the commenters in future rulemaking.
We note that group-level reporting does not provide the option for
groups to report at sub-levels of the group by specialty. We believe
that group-level reporting ensures coordination, teamwork, and shared
responsibility.
Comment: A few commenters expressed concern regarding MIPS eligible
clinicians moving practices in the middle of a reporting period. One
commenter recommended that if a clinician changes TINs during the
course of a year, their final composite score should be attributed to
their final TIN on December 31 of that year. Another commenter
indicated that by using a TIN/NPI combination, CMS could accurately
match reporting data to an individual clinician because often the NPI
of the clinician will not change, and CMS could match the new TIN to
ensure accurate attribution.
Response: We appreciate the concerns and suggestions from the
commenters and note that individual MIPS eligible clinicians may be
associated with more than one TIN during the performance period due to
a variety of reasons with differing timeframes. In sections II.E.6. and
II.E.7. of this final rule with comment period, we describe how
individual MIPS eligible clinicians will have their performance assessed
and scored, and
[[Page 77057]]
how the MIPS payment adjustment would be applied if a MIPS eligible
clinician changes TINs during the performance period.
Comment: One commenter expressed concern regarding how group size
would be calculated, particularly how clinicians that are not subject
to MIPS would be included in the size of the group.
Response: CMS does not make an eligibility determination regarding
group size. We note that groups attest to their group size for purposes
of using the CMS Web Interface or identifying as a small practice. We
note that group size is determined before exclusions are applied.
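The ordering described in the response above can be sketched as follows. This is an illustrative sketch only, not CMS code: the function names and inputs are hypothetical, and the 15-clinician small-practice threshold is drawn from the definition discussed later in this section.

```python
# Illustrative sketch (not CMS code): group size is the count of clinicians
# billing under the TIN *before* exclusions (e.g., new Medicare-enrolled
# clinicians) are applied.

def group_size(npis_under_tin, excluded_npis):
    # Size is determined before exclusions, so excluded NPIs still count;
    # the excluded_npis argument is deliberately unused.
    return len(set(npis_under_tin))

def is_small_practice(npis_under_tin):
    # A small practice is defined as 15 or fewer eligible clinicians.
    return len(set(npis_under_tin)) <= 15

npis = ["NPI-A", "NPI-B", "NPI-C"]
# Still a 3-clinician group even though NPI-C is excluded from MIPS.
assert group_size(npis, excluded_npis={"NPI-C"}) == 3
assert is_small_practice(npis)
```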
Comment: One commenter recommended that CMS allow validation or
updating of clinicians' identifying information in the PECOS system,
and not a separate system.
Response: We appreciate the suggestion from the commenter and will
consider it as we operationalize the use of PECOS for MIPS.
After consideration of the public comments we received, we are
finalizing the use of multiple identifiers that allow MIPS eligible
clinicians to be measured as an individual or collectively through a
group's performance. Additionally, we are finalizing our proposal that
the same identifier be used for all four performance categories. For
example, if a group is submitting information collectively, then it
must be measured collectively for all four MIPS performance categories:
Quality, cost, improvement activities, and advancing care information.
While we have multiple identifiers for participation and performance,
we are finalizing the use of a single identifier, TIN/NPI, for applying
the MIPS payment adjustment, regardless of how the MIPS eligible
clinician is assessed (see final score methodology outlined in section
II.E.6. of this final rule with comment period). Specifically, if the
MIPS eligible clinician is identified for performance only using the
TIN, we will use the TIN/NPI when applying the MIPS payment adjustment.
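The finalized relationship between the performance identifiers and the single payment adjustment identifier can be sketched as follows. This is a hedged illustration under our own assumptions, not a CMS data structure: the function, field names, and sample TIN/NPI values are hypothetical.

```python
# Hypothetical sketch of the finalized identifier scheme: performance may be
# assessed under an individual (TIN/NPI) or group (TIN) identifier, but the
# MIPS payment adjustment is always applied per TIN/NPI combination.

def payment_adjustment_keys(performance_id, tin_npis):
    """Expand a performance identifier into the TIN/NPI keys used when
    applying the MIPS payment adjustment.

    performance_id: ("TIN", tin) for group-level assessment, or
                    ("TIN/NPI", (tin, npi)) for individual assessment.
    tin_npis: mapping of TIN -> list of NPIs billing under that TIN.
    """
    kind, value = performance_id
    if kind == "TIN/NPI":
        # Individually assessed clinician: adjustment applies to that TIN/NPI.
        return [value]
    if kind == "TIN":
        # Group-assessed: the group's score yields an adjustment for every
        # TIN/NPI combination within the TIN.
        return [(value, npi) for npi in tin_npis[value]]
    raise ValueError(f"unknown identifier kind: {kind}")

tin_npis = {"TIN-1": ["NPI-A", "NPI-B"]}
print(payment_adjustment_keys(("TIN", "TIN-1"), tin_npis))
# -> [('TIN-1', 'NPI-A'), ('TIN-1', 'NPI-B')]
```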
a. Individual Identifiers
We proposed to use a combination of billing TIN/NPI as the
identifier to assess performance of an individual MIPS eligible
clinician. Similar to PQRS, each unique TIN/NPI combination would be
considered a different MIPS eligible clinician, and MIPS performance
would be assessed separately for each TIN under which an individual
bills. While we considered using the NPI only, we believe TIN/NPI is a
better approach for MIPS. Both TIN and NPI are needed for payment
purposes and using a combination of billing TIN/NPI as the MIPS
eligible clinician identifier allows us to match MIPS performance and
MIPS payment adjustments with the appropriate practice, particularly
for MIPS eligible clinicians that bill under more than one TIN. In
addition, using TIN/NPI also provides the flexibility to allow
individual MIPS eligible clinician and group reporting, as the proposed
group identifiers also include TIN as part of the identifier. We
recognize that TIN/NPI is not a static identifier and can change if an
individual MIPS eligible clinician changes practices and/or if a group
merges with another between the performance period and payment
adjustment period. Section II.E.7.a. of the proposed rule describes in
more detail how we proposed to match performance in cases where the
TIN/NPI changes. We requested comments on this proposal.
The following is a summary of the comments we received regarding
our proposal to use a combination of billing TIN/NPI as the identifier
to assess performance of an individual MIPS eligible clinician.
Comment: One commenter expressed concern that independent
physicians would not fare well as a result of the proposed rule.
Response: We appreciate the concern expressed by the commenter. We
believe that independent clinicians will benefit from policies we are
finalizing throughout this final rule with comment period such as the
higher low-volume threshold, lower performance requirements, and lower
performance threshold.
Comment: One commenter found the MIPS terminology confusing and
believed that tracking individual clinicians for reimbursement, as
outlined in the proposed rule, would be difficult.
Response: We appreciate the feedback from the commenter and will
consider the ways we can explain the MIPS requirements to ensure that
information is clear, understandable, and consistent.
Comment: Several commenters requested clarification regarding how
individual MIPS eligible clinicians who bill to multiple TINs would
have their performance assessed. Commenters questioned whether they are
eligible for the MIPS payment adjustment under multiple TINs, whether they are
expected to perform under all four categories for each TIN where they
practice, and how a Partial QP and individual in a group practice would
be assessed for purposes of the 2019 MIPS payment adjustment based on
the TIN/NPI combination.
Response: For MIPS eligible clinicians associated with multiple
TINs, we note that MIPS eligible clinicians will need to meet the MIPS
requirements for each TIN they are associated with unless they are
excluded from the MIPS requirements based on one of the three
exclusions (as described in section II.E.3. of this final rule with
comment period) at the individual and/or group level.
Comment: One commenter questioned the benefit to clinicians
reporting at the TIN/NPI level compared to the NPI level.
Response: We note that groups have the option to report at the
individual (TIN/NPI) level or the group (TIN) level. Depending on the
composition of groups, groups may find that reporting at the individual
level may be more advantageous for the group than the reporting at the
group level and vice versa. Individual eligible clinicians who are not
part of a group, would report at the individual level.
Comment: To facilitate individual clinician-level information, one
commenter recommended that CMS use the NPI identifier throughout the
MIPS program. The commenter noted that the NPI is also used by the
private sector, promoting greater alignment than would a newly created
MIPS clinician identifier.
Response: We appreciate the suggestion from the commenter, but
disagree that we should establish an identifier only at the NPI level.
We need to account not only for individual NPIs, but also for eligible
clinicians and MIPS eligible clinicians who are associated with a group,
given that group-level reporting is an option and scoring and MIPS
payment adjustments would need to be applied accordingly. As a result,
we are finalizing the individual MIPS eligible clinician identifier
using the TIN/NPI combination.
Comment: One commenter requested clarification on how clinicians
using only a TIN will be scored, and then have their payment adjusted
based on the TIN/NPI.
Response: We note that groups reporting at the group level will be
assessed and scored at the TIN level, and have a MIPS payment
adjustment applied at the TIN/NPI level. We note that the MIPS payment
adjustment is applied to the MIPS eligible clinicians within the TIN
for billed Medicare Part B charges.
[[Page 77058]]
After consideration of the public comments we received, we are
finalizing our proposed definition of a MIPS eligible clinician at
Sec. 414.1305 to use a unique combination of billing TIN and NPI as
the identifier to assess performance of an individual MIPS eligible
clinician. Each unique TIN/NPI combination will be
considered a different MIPS eligible clinician, and MIPS performance
will be assessed separately for each TIN under which an individual
bills. We recognize that TIN/NPI is not a static identifier and can
change if an individual MIPS eligible clinician changes practices and/
or if a group merges with another between the performance period and
payment adjustment period. We refer readers to section II.E.7.a. of
this final rule with comment period, which describes our final policy
for matching performance in cases where the TIN/NPI changes.
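The finalized individual identifier, under which each unique TIN/NPI combination is a distinct MIPS eligible clinician, can be sketched as follows. This is an illustrative sketch, not CMS code; the claim tuples and dollar figures are invented for the example.

```python
# Illustrative sketch (not CMS code): claims keyed by billing TIN and NPI are
# grouped so that each unique TIN/NPI combination is treated as a distinct
# MIPS eligible clinician, assessed separately for each TIN an individual
# bills under.
from collections import defaultdict

def clinicians_from_claims(claims):
    """claims: iterable of (tin, npi, allowed_charge) tuples.
    Returns a dict mapping each unique (tin, npi) identifier to the total
    allowed charges assessed under that identifier."""
    totals = defaultdict(float)
    for tin, npi, charge in claims:
        totals[(tin, npi)] += charge
    return dict(totals)

claims = [
    ("TIN-1", "NPI-A", 100.0),  # NPI-A billing under TIN-1
    ("TIN-2", "NPI-A", 50.0),   # same NPI under a second TIN: a separate
                                # MIPS eligible clinician for assessment
    ("TIN-1", "NPI-B", 75.0),
]
assert len(clinicians_from_claims(claims)) == 3
```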
b. Group Identifiers for Performance
We proposed the following way a MIPS eligible clinician may have
their performance assessed as part of a group under MIPS. We proposed
to use a group's billing TIN to identify a group. This approach has
been used as a group identifier for both PQRS and VM. The use of the
TIN would significantly reduce the participation burden that could be
experienced by large groups. Additionally, the utilization of the TIN
benefits large and small practices by allowing such entities to submit
performance data one time for their group and develop systems to
improve performance. Groups that report on quality performance measures
through certain data submission methods must register to participate in
MIPS as described in section II.E.5.b. of the proposed rule.
We proposed to codify the definition of a group at Sec. 414.1305
as a group that would consist of a single TIN with two or more MIPS
eligible clinicians (as identified by their individual NPI) who have
reassigned their billing rights to the TIN. We requested comments on
this proposal.
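The codified definition above can be sketched as a simple check. This is a hedged illustration under our own assumptions, not CMS code: the record layout and field names are hypothetical.

```python
# Hypothetical sketch of the definition codified at Sec. 414.1305: a group is
# a single TIN with two or more MIPS eligible clinicians (identified by their
# individual NPIs) who have reassigned their billing rights to the TIN.

def qualifies_as_group(tin_record):
    """tin_record: dict with 'tin' and 'clinicians', where each clinician is
    a dict carrying an 'npi' and a 'reassigned_billing_rights' flag."""
    reassigned = {
        c["npi"]
        for c in tin_record["clinicians"]
        if c["reassigned_billing_rights"]
    }
    # Two or more MIPS eligible clinicians under the single TIN are required.
    return len(reassigned) >= 2

record = {
    "tin": "TIN-1",
    "clinicians": [
        {"npi": "NPI-A", "reassigned_billing_rights": True},
        {"npi": "NPI-B", "reassigned_billing_rights": True},
        {"npi": "NPI-C", "reassigned_billing_rights": False},
    ],
}
assert qualifies_as_group(record)
```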
The following is a summary of the comments we received regarding
our proposal establishing the way a MIPS eligible clinician may have
their performance assessed as part of a group under MIPS.
Comment: Several commenters expressed concern regarding the group
identifier. Commenters indicated that a group identifier restricts
group reporting to TIN-level identification because TINs may represent
many different specialties and subspecialists that have elected to join
together for non-practice related reasons, such as billing purposes.
Commenters recommended that CMS allow TINs to subdivide into smaller
groups for the purposes of participating in MIPS. A few commenters
recommended that CMS expand the definition of a group to include
subsets in a TIN so that groups of specialists or sub-specialists
within a TIN can be allowed to group accordingly. One commenter
suggested expanding the allowable group identifiers for physician
groups to include a group's sub-tax identification numbers based on the
Medicare PFS area or the hospital payment area in which they provide
care. A few commenters encouraged CMS to consider providing additional
flexibility to allow clinicians to submit group rosters of TIN/NPI
combinations to CMS to define a MIPS reporting group. The commenters
noted that this approach would allow a large, multispecialty group
under one TIN to split into clinically-relevant reporting groups, or
multiple TINs within a delivery system to group report under a common
group. In addition to the options that CMS proposed regarding use of
multiple identifiers to assess physician/group performance under MIPS,
one commenter recommended that CMS permit groups to ``split'' TINs for
this purpose. Another commenter noted that such flexibility would be a
very useful precursor to future APM participation.
Response: We appreciate the commenters expressing their concerns
and providing recommendations. We recognize that groups have varying
compositions of eligible clinicians and will consider the suggestions
from commenters in future rulemaking. We disagree with commenters
regarding their suggested approach for defining a group because
multiple sublevel identifiers create more complexity given that it
would require the establishment of numerous identifiers in order to
account for all types of group compositions. We note that except for
groups that contain APM participants, we are not permitting groups to
``split'' TINs if they choose to participate in MIPS as a group. We
believe it is critical to establish the definition of a group that
ensures coordination, teamwork, and shared responsibility at the group
level, and our proposed definition achieves this objective. We
note that groups have the opportunity to analyze their data in ways that
are meaningful to the group, which may include analyses for each
segment of a group to promote and enhance the coordination of care and
improve the quality of care and health outcomes.
Comment: Several commenters supported the proposed approach to
reduce the participation burden by allowing large groups to report as a
group. One commenter requested clarification on how a group's
performance and final score would be applied to all NPIs in the TIN,
particularly whether CMS would assess each individual across the four
performance categories and then cumulatively calculate the final score
or whether CMS would assess a group-based collective set of objectives
that could be met by any combination of individual clinicians inside
the group to calculate the final score.
Response: In section II.E.3.d. of this final rule with comment
period, we note that groups reporting at the group level (TIN) must
meet the definition of a group at all times during the performance
period for the MIPS payment year. In order for groups to have their
performance assessed as a group across all performance categories,
individual eligible clinicians and MIPS eligible clinicians within a
group must aggregate their performance data across the TIN.
Comment: One commenter indicated that the scoring methodology for
large TINs is ambiguous.
Response: We note that the scoring methodology for groups,
regardless of size, is the same as described in section II.E.6. of this
final rule with comment period.
Comment: One commenter requested further clarification of
attribution of eligible activities (for example, improvement
activities) for one organization with one TIN that participates in MIPS
and multiple APMs.
Response: For those TINs that have MIPS eligible clinicians that
are subject to the APM scoring standard, we refer readers to section
II.E.5.h. of this final rule with comment period for our discussion
regarding policies pertaining to the APM scoring standard.
Comment: Several commenters agreed with our proposal to not require
an additional identifier for qualified clinicians and instead use a
combination of MIPS eligible clinician NPI and group billing TIN. To
ease the administrative burden, commenters recommended the following:
have attribution of a qualified clinician to a group's billing TIN be
done automatically by CMS based on billing PECOS data; do not require
individual third party rights for qualified clinicians, but instead let
program administrators at each health system register for their groups
and automatically have access to qualified
clinicians associated with that TIN; and provide for the ability to
look up statuses, eligibility, program history and other information by
both individual NPI and group TIN.
Response: We appreciate the recommendations from the commenters and
will consider them as we establish subregulatory guidance regarding the
voluntary registration process for groups and the registration process
for groups electing to use the CMS Web Interface data submission
mechanism and/or administer the CAHPS for MIPS survey.
Comment: Several commenters requested that CMS consistently define
``small'' practices and consider additional accommodations for such
practices. Commenters noted that the proposal may overburden smaller
groups. A few commenters indicated that solo or small practices with
fewer than 25 clinicians should be exempt from MIPS, while
other commenters recommended that group practices of 15 or fewer
clinicians be exempt from MIPS. One commenter suggested that CMS review
opportunities to provide incentives targeted around quality metrics
reflective of the patient population served.
Response: We note that a small practice is defined as a practice
consisting of 15 or fewer eligible clinicians. We note that the statute
does not provide the discretion to establish exclusions other than the
exclusions pertaining to new Medicare-enrolled eligible clinicians, QPs
and Partial QPs who do not participate in MIPS, and eligible clinicians
who do not exceed the low-volume threshold. However, small groups may
be excluded from MIPS if they do not exceed the low-volume threshold as
established in section II.E.3.c. of this final rule with comment
period.
Comment: One commenter requested that post-acute and long-term care
practices be considered separately in this proposal. The commenter
indicated that grouping them with their specialty peers practicing in a
traditional ambulatory setting creates inequities. In particular, the
commenter noted that benchmarks and thresholds are not comparable due
to the different natures of the types of practice.
Response: We recognize that groups will have varying compositions
and note that groups have the option to report at the individual level
or group level. In section II.E.3.c. of this final rule with comment
period, we describe the low-volume threshold exclusion which is applied
at the individual eligible clinician level or the group level. A group
that would not be excluded from MIPS when reporting at a group level
may find it advantageous to report at the individual level.
After consideration of the public comments we received, we are
finalizing a modification to our proposal regarding the use of a
group's billing TIN to identify a group. Thus, we are codifying the
definition of a group at Sec. 414.1305 to mean a group that consists
of a single TIN with two or more eligible clinicians (including at
least one MIPS eligible clinician), as identified by their individual
NPI, who have reassigned their billing rights to the TIN.
c. APM Entity Group Identifier for Performance
We proposed the following way to identify a group to support APMs
(see section II.F.5.b. of this rule). To ensure we have accurately
captured all of the eligible clinicians identified as participants that
are participating in the APM Entity, we proposed that each eligible
clinician who is a participant of an APM Entity would be identified by
a unique APM participant identifier. The unique APM participant
identifier would be a combination of four identifiers: (1) APM
Identifier (established by CMS; for example, XXXXXX); (2) APM Entity
identifier (established under the APM by CMS; for example, AA00001111);
(3) TIN(s) (9 numeric characters; for example, XXXXXXXXX); (4) EP NPI
(10 numeric characters; for example, 1111111111). For example, an APM
participant identifier could be APM XXXXXX, APM Entity AA00001111, TIN-
XXXXXXXXX, NPI-1111111111.
We proposed to codify the definition of an APM Entity group at
Sec. 414.1305 as an APM Entity identified by a unique APM participant
identifier. We requested comments on these proposals. See section
II.E.5.h. of the proposed rule for proposed policies regarding
requirements for APM Entity groups under MIPS.
The following is a summary of the comments we received regarding
our proposal establishing the way each eligible clinician who is a
participant of an APM Entity would be identified by a unique APM
participant identifier.
Comment: Several commenters supported the approach to identify APM
professionals by a combination of APM identifier, APM entity
identifier, TIN and NPI. Commenters requested that CMS make the QP
identifiers available via an application program interface (API), which
would improve an APM participant's ability to provide accurate and
timely reports. However, one commenter recommended that an APM Entity
group be defined using a unique APM participant identifier composed of
a combination of four, cross-referenced identifiers: APM ID, MIPS ID,
TIN, and NPI. The commenter shared that their Shared Savings Program
experience with their ACO Identifier has been very positive, and
suggested that MIPS adopt a similar definition and use the APM-MIPS ID
for day-to-day APM identification, versus the proposed alternative.
Response: We appreciate the support and suggestions from the
commenters. As we operationalize the process for APM Entity
identifiers, we will take into consideration the recommendation of
making the QP identifier available via an API. In regard to the
suggestion regarding the APM Entity group identifier, we do not believe it is
necessary to create an additional MIPS ID for the purposes of tracking
APM Entities under MIPS. We further note that for all APMs, the APM
Entity identifiers are the same identifiers that are currently used by
CMS for other purposes. For example, in the case of the Shared Savings
Program, since ACOs are the participating APM Entity, the APM Entity
identifier would be the same as the ACO Identifier. We believe that
tracking APM Entity participation in this way is most consistent with
how CMS currently tracks APM Entity participation, and eliminates any
unnecessary burden of tracking any new, additional identifiers.
Comment: One commenter requested clarification on the use of the
APM participant identifier and whether the APM participant identifier
would be a required data element for submission.
Response: We note that the APM participant identifier will be used to
ensure accurate tracking of all APM participants and will be comprised
of the four already existing identifiers that are described in this section. In
regard to the data elements required for the submission of data via a
submission mechanism, the required data elements will depend on the
requirements for each data submission mechanism. The submission
procedures for each data submission mechanism will be further outlined
in subregulatory guidance.
Comment: One commenter did not support the proposal regarding how
an APM Entity group would be defined. The commenter requested
clarification as to why an APM participant could not be identified by a
combination of TIN/NPI, and a single character prefix or suffix to
denote the eligible clinician is part of an APM entity.
Response: We appreciate the feedback from the commenter. We note
that our proposal to use the APM ID, APM Entity Identifier, TIN and NPI
is most consistent with how APM participation
is currently tracked within our systems. Introducing another method of
identification, such as a single character prefix or suffix, would be a
deviation from our already existing operational processes, and we do
not foresee that such a deviation would add any program efficiencies or
facilitate participant tracking.
Comment: One commenter did not support mandatory reporting and
participation, and indicated that ACOs are an example of forcing
participation in alternative payment models, resulting in a failure to
save money and difficulty retaining participants.
Response: We appreciate the concerns from the commenter and note
that participation in MIPS is mandatory while participation in an ACO
(or APM) is voluntary. Based on the results generated to date under the
Shared Savings Program, the data suggests that the longer organizations
stay in the Shared Savings Program, the more likely they are able to
achieve savings. Also, the number of organizations participating in the
Shared Savings Program is increasing annually.
Comment: One commenter recommended that CMS take into account the
burden placed on certain subspecialties that may not and will not have
the flexibility to participate in many current APMs. Another commenter
recommended that CMS identify specialties and subspecialties currently
unable to participate in Advanced APMs and establish ways to minimize
their burden and risk of receiving a penalty under MIPS.
Response: We thank the commenters for expressing their concerns. As
we develop the operational elements of the MIPS program, we strive to
establish a process ensuring that participation in MIPS can be
successful. Based on the experience and feedback provided by
stakeholders regarding previously established CMS programs, we are
improving and enhancing the user-experience for MIPS. We will continue
to seek stakeholder feedback as we implement the MIPS program.
After consideration of the public comments we received, we are
finalizing our proposal that each eligible clinician who is a
participant of an APM Entity will be identified by a unique APM
participant identifier. The unique APM participant identifier will be a
combination of four identifiers: (1) APM Identifier (established by
CMS; for example, XXXXXX); (2) APM Entity identifier (established under
the APM by CMS; for example, AA00001111); (3) TIN(s) (9 numeric
characters; for example, XXXXXXXXX); (4) EP NPI (10 numeric characters;
for example, 1111111111). For example, an APM participant identifier
could be APM XXXXXX, APM Entity AA00001111, TIN-XXXXXXXXX, NPI-
1111111111. Thus, we are codifying the definition of an APM Entity
group at Sec. 414.1305 to mean a group of eligible clinicians
participating in an APM Entity, as identified by a combination of the
APM identifier, APM Entity identifier, Taxpayer Identification Number
(TIN), and National Provider Identifier (NPI) for each participating
eligible clinician.
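The composite identifier finalized above lends itself to a short illustration. The following is a minimal, hypothetical sketch in Python (the function name, validation, and formatting are illustrative assumptions, not a CMS interface):

```python
# A minimal sketch of the composite identifier finalized above
# (hypothetical; not a CMS system). The four components are the existing
# APM Identifier, APM Entity identifier, TIN, and NPI.

def apm_participant_id(apm_id: str, entity_id: str, tin: str, npi: str) -> str:
    """Combine the four existing identifiers into one unique APM
    participant identifier, with light format validation."""
    if not (tin.isdigit() and len(tin) == 9):
        raise ValueError("TIN must be 9 numeric characters")
    if not (npi.isdigit() and len(npi) == 10):
        raise ValueError("NPI must be 10 numeric characters")
    return f"APM {apm_id}, APM Entity {entity_id}, TIN-{tin}, NPI-{npi}"

# Using the placeholder values from the rule's example:
print(apm_participant_id("XXXXXX", "AA00001111", "123456789", "1234567890"))
# APM XXXXXX, APM Entity AA00001111, TIN-123456789, NPI-1234567890
```

Because each component already exists in CMS systems, the composite key requires no new registration step for participants.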
3. Exclusions
a. New Medicare-Enrolled Eligible Clinician
Section 1848(q)(1)(C)(v) of the Act provides that in the case of a
professional who first becomes a Medicare-enrolled eligible clinician
during the performance period for a year (and had not previously
submitted claims under Medicare either as an individual, an entity, or
a part of a physician group or under a different billing number or tax
identifier), that the eligible clinician will not be treated as a MIPS
eligible clinician until the subsequent year and performance period for
that year. In addition, section 1848(q)(1)(C)(vi) of the Act clarifies
that individuals who are not deemed MIPS eligible clinicians for a year
will not receive a MIPS payment adjustment. Accordingly, we proposed at
Sec. 414.1305 that a new Medicare-enrolled eligible clinician be
defined as a professional who first becomes a Medicare-enrolled
eligible clinician within the PECOS during the performance period for a
year and who has not previously submitted claims as a Medicare-enrolled
eligible clinician either as an individual, an entity, or a part of a
physician group or under a different billing number or tax identifier.
These eligible clinicians will not be treated as a MIPS eligible
clinician until the subsequent year and the performance period for such
subsequent year. As discussed in section II.E.4. of the proposed rule
(81 FR 28179 through 28181), we proposed that the MIPS performance
period would be the calendar year (January 1 through December 31) 2
years prior to the year in which the MIPS payment adjustment is
applied. For example, an eligible clinician who newly enrolls in
Medicare within PECOS in 2017 would not be required to participate in
MIPS in 2017, and he or she would not receive a MIPS payment adjustment
in 2019. The same eligible clinician would be required to participate
in MIPS in 2018 and would receive a MIPS payment adjustment in 2020,
and so forth. In addition, in the case of items and services furnished
during a year by an individual who is not a MIPS eligible clinician,
there will not be a MIPS payment adjustment applied for that year. We
also proposed at Sec. 414.1310(d) that in no case would a MIPS payment
adjustment apply to the items and services furnished by new Medicare-
enrolled eligible clinicians. We requested comments on these proposals.
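The timing in the paragraph above can be sketched briefly. This is a hypothetical Python illustration (the function and constant names are assumptions), mirroring the worked example: a clinician who newly enrolls in Medicare in 2017 first participates in MIPS in 2018 and first receives a payment adjustment in 2020.

```python
# A minimal sketch of the proposed timing for new Medicare-enrolled
# eligible clinicians (hypothetical; not a CMS system). The clinician is
# excluded from MIPS for the enrollment year, and the payment adjustment
# applies 2 years after the performance period.

PAYMENT_LAG_YEARS = 2  # payment adjustment year = performance year + 2

def first_mips_years(enrollment_year: int) -> tuple:
    """Return (first performance year, first payment adjustment year) for
    a clinician who first becomes Medicare-enrolled in enrollment_year."""
    # Excluded from MIPS during the enrollment year itself.
    first_performance_year = enrollment_year + 1
    return (first_performance_year, first_performance_year + PAYMENT_LAG_YEARS)

assert first_mips_years(2017) == (2018, 2020)
```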
The following is a summary of the comments we received regarding
our proposals to define a new Medicare-enrolled eligible clinician as a
professional who first becomes a Medicare-enrolled eligible clinician
within the PECOS during the performance period for a year and who has
not previously submitted claims under Medicare either as an individual,
an entity, or a part of a physician group or under a different billing
number or tax identifier, that the eligible clinician would not be
treated as a MIPS eligible clinician until the subsequent year and
performance period for such subsequent year, that a MIPS payment
adjustment would not be applied in the case of items and services
furnished during a year by an individual who is not a MIPS eligible
clinician, and that in no case would a MIPS payment adjustment apply to
the items and services furnished by new Medicare-enrolled eligible
clinicians.
Comment: One commenter recommended postponing the implementation of
the ``new'' types of clinicians to a later effective date.
Response: We appreciate the suggestion from the commenter, but note
that we do not find it necessary or justifiable to postpone the
implementation of the new Medicare-enrolled eligible clinician
provision.
Comment: One commenter requested clarification on how CMS would
require clinicians who are new Medicare-enrolled eligible clinicians to
participate in MIPS after their first 12 months of Medicare enrollment
passed.
Response: We note that section 1848(q)(1)(C)(v) of the Act provides
that in the case of a professional who first becomes a Medicare-
enrolled eligible clinician during the performance period for a year
(and had not previously submitted claims under Medicare either as an
individual, an entity, or a part of a physician group or under a
different billing number or tax identifier), that the eligible
clinician will not be treated as a MIPS eligible clinician until the
subsequent year and performance period for that year. We note that new
Medicare-enrolled eligible clinicians are excluded from MIPS during the
performance period in which they are
identified as a new Medicare-enrolled eligible clinician. For
example, if an eligible clinician becomes a new Medicare-enrolled
eligible clinician in April of a particular year, such eligible
clinician would be excluded from MIPS until the subsequent year and
performance period for that year, in which such eligible clinician
would be required to participate in MIPS starting in January of the
next year.
Moreover, section 1848(q)(1)(C)(vi) of the Act clarifies that
individuals who are not deemed MIPS eligible clinicians for a year will
not receive a MIPS payment adjustment. Accordingly, we define a new
Medicare-enrolled eligible clinician as a professional who first
becomes a Medicare-enrolled eligible clinician within the PECOS during
the performance period for a year and who has not previously submitted
claims as a Medicare-enrolled eligible clinician either as an
individual, an entity, or a part of a physician group or under a
different billing number or tax identifier. These eligible clinicians
will not be treated as a MIPS eligible clinician until the subsequent
year and the performance period for such subsequent year. Thus, such
eligible clinicians would be treated as a MIPS eligible clinician in
their subsequent year of being a Medicare-enrolled eligible clinician,
required to participate in MIPS, and subject to the MIPS payment
adjustment for the performance period of that subsequent year.
Comment: One commenter requested clarification on clinicians'
eligibility under MIPS and their designation on whether they are
Medicare or Medicaid-enrolled from year to year.
Response: In section II.E.1.a. of this final rule with comment
period, we define a MIPS eligible clinician. Clinicians meeting the
definition of a MIPS eligible clinician are required to participate in
MIPS unless eligible for an exclusion as defined in section II.E.3. of
this final rule with comment period. For purposes of MIPS, we are able
to identify an eligible clinician who first becomes a Medicare-enrolled
eligible clinician within the PECOS during the performance period for a
year and who has not previously submitted claims as a Medicare-enrolled
eligible clinician either as an individual, an entity, or a part of a
physician group or under a different billing number or tax identifier.
Comment: Several commenters supported the exclusion of new
Medicare-enrolled eligible clinicians from MIPS; however, commenters
indicated that it is unreasonable to require new Medicare-enrolled
eligible clinicians to begin participating in MIPS during the next
performance period, especially those that become new Medicare-enrolled
eligible clinicians later in the year. The commenters recommended
giving new Medicare-enrolled eligible clinicians the option of being
excluded from MIPS in both the performance period in which they begin
treating Medicare patients and in the following performance period. One
commenter opposed CMS's proposal that clinicians newly enrolling in
Medicare in 2017 would have to participate in MIPS starting January 1,
2018, and requested that CMS instead extend the window so that
clinicians enrolling in Medicare in 2017 would not begin participation
until January 1, 2019. Another commenter suggested that CMS consider
new Medicare-enrolled eligible clinicians ineligible for MIPS until the
first performance period following at least 12 months of enrollment in
Medicare.
Response: We thank the commenters for expressing their concerns.
While the statute does not give the Secretary discretion to further
delay MIPS participation for these eligible clinicians, we note that in
the transition year (CY 2017) and performance period for such year in
which an eligible clinician is treated as a MIPS eligible clinician,
the clinician may qualify for an exclusion under the low-volume
threshold. We refer readers to section II.E.3.c. of this final rule
with comment period, which further describes the low-volume threshold
provision.
Comment: A few commenters supported CMS' proposal that a new
Medicare-enrolled eligible clinician would not be eligible to
participate in the MIPS program until the subsequent performance
period.
Response: We appreciate the support from the commenters.
Comment: A few commenters offered recommendations pertaining to
exemptions that CMS should consider. One commenter suggested that
medical/surgical practices of 15 professionals or fewer be fully exempt
from MIPS; otherwise, many Medicare patients risk losing access to
physicians who have cared for them for many years. Another commenter
recommended that MIPS eligible clinicians who are a Tier 1 or part of a
Center of Excellence or a High Quality Provider with a private insurer
should be exempt from penalties because they are a proven benefit to
the system already and should not be penalized.
Response: We appreciate the commenters providing their
recommendations. We note that the suggestions are outside the scope of
the proposals described in the proposed rule (81 FR 28161) and reiterate that
the statute only allows for limited exceptions for eligible clinicians
to be exempt from the MIPS requirements.
Comment: One commenter encouraged CMS to only use exceptions and
special cases as outlined in the proposed rule when absolutely
necessary because the creation of exceptions, exclusions, and multiple
performance pathways would introduce unnecessary reporting burden for
participating MIPS eligible clinicians.
Response: We thank the commenter for the suggestion and note that
in this final rule with comment period, we are finalizing our proposed
exclusions pertaining to new Medicare-enrolled eligible clinicians and
QPs and Partial QPs, and modifying our proposed exclusion pertaining to
the low-volume threshold, as discussed in sections II.E.3.a.,
II.E.3.b., and II.E.3.c., of this final rule with comment period.
After consideration of the public comments we received, we are
finalizing the definition of a new Medicare-enrolled eligible clinician
at Sec. 414.1305 as a professional who first becomes a Medicare-
enrolled eligible clinician within the PECOS during the performance
period for a year and had not previously submitted claims under
Medicare either as an individual, an entity, or a part of a physician
group or under a different billing number or tax identifier. We are
finalizing our proposal at Sec. 414.1310(c) that these eligible
clinicians will not be treated as a MIPS eligible clinician until the
subsequent year and the performance period for such subsequent year. As
outlined in section II.E.4. of this final rule with comment period, we
are finalizing a modification to the MIPS performance period to be a
minimum of one continuous 90-day period within CY 2017. In the case of
items and services furnished during a year by an individual who is not
a MIPS eligible clinician during the performance period, there will not
be a MIPS payment adjustment applied for that payment adjustment year.
Additionally, we are finalizing our proposal at Sec. 414.1310(d) that
in no case would a MIPS payment adjustment apply to the items and
services furnished during a year by new Medicare-enrolled eligible
clinicians for the applicable performance period.
We believe that it would be beneficial for eligible clinicians to
know during the performance period of a calendar year whether or not
they are identified as a new Medicare-enrolled eligible clinician. For
purposes of this section,
we are coining the term ``new Medicare-enrolled eligible clinician
determination period'' and defining it to mean the 12 months of a
calendar year applicable to the performance period. During the new
Medicare-enrolled eligible clinician determination period, we will
conduct eligibility determinations on a quarterly basis to the extent
that is technically feasible in order to identify new Medicare-enrolled
eligible clinicians that would be excluded from the requirement to
participate in MIPS for the applicable performance period. Given that
the performance period is a minimum of one continuous 90-day period
within CY 2017, we believe it would be beneficial for such eligible
clinicians to be identified as being excluded from MIPS requirements on
a quarterly basis in order for individual eligible clinicians or groups
to plan and prepare accordingly. For future years of the MIPS program,
we will conduct similar eligibility determinations on a quarterly basis
during the new Medicare-enrolled eligible clinician determination
period, which consists of the 12 months of a calendar year applicable
to the performance period, in order to identify throughout the calendar
year eligible clinicians who would be excluded from MIPS as a result of
first becoming new Medicare-enrolled eligible clinicians during the
performance period for a given year.
b. Qualifying APM Participant (QP) and Partial Qualifying APM
Participant (Partial QP)
Sections 1848(q)(1)(C)(ii)(I) and (II) of the Act provide that the
definition of a MIPS eligible clinician does not include, for a year,
an eligible clinician who is a Qualifying APM Participant (QP) (as
defined in section 1833(z)(2) of the Act) or a Partial Qualifying APM
Participant (Partial QP) (as defined in section 1848(q)(1)(C)(iii) of
the Act) who does not report on the applicable measures and activities
that are required under MIPS. Section II.F.5. of the proposed rule
provides detailed information on the determination of QPs and Partial
QPs.
We proposed that the definition of a MIPS eligible clinician at
Sec. 414.1310 does not include QPs (defined at Sec. 414.1305) and
Partial QPs (defined at Sec. 414.1305) who do not report on applicable
measures and activities that are required to be reported under MIPS for
any given performance period. Partial QPs will have the option to elect
whether or not to report under MIPS, which determines whether or not
they will be subject to MIPS payment adjustments. Please refer to the
section II.F.5.c. of the proposed rule where this election is discussed
in greater detail. We requested comments on this proposal.
The following is a summary of the comments we received regarding
our proposal that the definition of a MIPS eligible clinician does not
include QPs (defined at Sec. 414.1305) and Partial QPs (defined at
Sec. 414.1305) who do not report on applicable measures and activities
that are required to be reported under MIPS for any given performance
period, in which Partial QPs will have the option to elect whether or
not to report under MIPS.
Comment: One commenter recommended that CMS consider presumptive QP
status in the first performance year, and prospective notification of
QP status based on prior year thresholds. Alternatively, if in the year
following the performance year CMS determines the Advanced APM Entity
has not yet met the required threshold score, the commenter indicated
that CMS could either: Assign the entity's participating clinicians a
neutral MIPS score without a penalty or reward; or allow them to
complete two of the four MIPS performance categories in 2018 and have
the results count for 2019 payments.
Response: We refer readers to section II.F.5 of this final rule
with comment period for policies regarding QP and Partial QP
determinations.
After consideration of the public comments we received, we are
finalizing our proposal at Sec. 414.1305 that the definition of a MIPS
eligible clinician does not include QPs (defined at Sec. 414.1305) and
Partial QPs (defined at Sec. 414.1305) who do not report on applicable
measures and activities that are required to be reported under MIPS for
any given performance period in a year. Also, we are finalizing our
proposed policy at Sec. 414.1310(b) that for a year, QPs (defined at
Sec. 414.1305) and Partial QPs (defined at Sec. 414.1305) who do not
report on applicable measures and activities that are required to be
reported under MIPS for any given performance period in a year are
excluded from MIPS. Partial QPs will have the option to elect whether
or not to report under MIPS, which determines whether or not they will
be subject to MIPS payment adjustments.
c. Low-Volume Threshold
Section 1848(q)(1)(C)(ii)(III) of the Act provides that the
definition of a MIPS eligible clinician does not include MIPS eligible
clinicians who are below the low-volume threshold selected by the
Secretary under section 1848(q)(1)(C)(iv) of the Act for a given year.
Section 1848(q)(1)(C)(iv) of the Act requires the Secretary to select a
low-volume threshold to apply for the purposes of this exclusion which
may include one or more of the following: (1) The minimum number, as
determined by the Secretary, of Part B-enrolled individuals who are
treated by the MIPS eligible clinician for a particular performance
period; (2) the minimum number, as determined by the Secretary, of
items and services furnished to Part B-enrolled individuals by the MIPS
eligible clinician for a particular performance period; and (3) the
minimum amount, as determined by the Secretary, of allowed charges
billed by the MIPS eligible clinician for a particular performance
period.
We proposed at Sec. 414.1305 to define MIPS eligible clinicians or
groups who do not exceed the low-volume threshold as an individual MIPS
eligible clinician or group who, during the performance period, has
Medicare billing charges less than or equal to $10,000 and provides
care for 100 or fewer Part B-enrolled Medicare beneficiaries. We
believed this strategy holds more merit as it retains as MIPS eligible
clinicians those MIPS eligible clinicians who are treating relatively
few beneficiaries, but engage in resource intensive specialties, or
those treating many beneficiaries with relatively low-priced services.
By requiring both criteria to be met, we can meaningfully measure the
performance and drive quality improvement across the broadest range of
MIPS eligible clinician types and specialties. Conversely, it excludes
MIPS eligible clinicians who do not have a substantial quantity of
interactions with Medicare beneficiaries or furnish high cost services.
In developing this proposal, we considered using items and services
furnished to Part B-enrolled individuals by the MIPS eligible clinician
for a particular performance period rather than patients, but a review
of the data reflected there were nominal differences between the two
methods. We plan to monitor the proposed requirement and anticipate
that the specific thresholds will evolve over time. We requested
comments on this proposal including alternative patient threshold, case
thresholds, and dollar values.
The following is a summary of the comments we received regarding
our proposal to define MIPS eligible clinicians or groups who do not
exceed the low-volume threshold as an individual MIPS eligible
clinician or group who, during the performance period, has Medicare
billing charges less than or equal to $10,000 and provides care for 100
or fewer Part B-enrolled Medicare beneficiaries.
Comment: A few commenters supported the proposed policy to exempt
MIPS eligible clinicians or groups from MIPS requirements who do not
exceed the low-volume threshold of having Medicare billing charges less
than or equal to $10,000 and providing care for 100 or fewer Part B-
enrolled Medicare beneficiaries. In particular, one commenter expressed
support for the dual criteria of the low-volume threshold (Medicare
billing charges less than or equal to $10,000 and providing care for
100 or fewer Part B-enrolled Medicare beneficiaries).
Response: We appreciate the support from the commenters.
Comment: A significant portion of commenters expressed concern
regarding our proposed low-volume threshold provision, particularly the
requirement for MIPS eligible clinicians and groups to meet both the
low-volume threshold pertaining to the dollar value of Medicare billing
charges and the number of Medicare Part B beneficiaries cared for
during a performance period. The commenters requested that CMS modify
the criteria under the definition of MIPS eligible clinicians or groups
who do not exceed the low-volume threshold to require that an
individual MIPS eligible clinician or group would need to meet either
the low-volume threshold pertaining to the dollar value of Medicare
billing charges or the number of Medicare Part-B beneficiaries cared
for during a performance period in order to determine whether or not an
individual MIPS eligible clinician or group exceeds the low-volume
threshold. Several commenters noted that such a change would provide
greater flexibility for specialty clinicians.
Response: We appreciate the concerns expressed by commenters. We
agree with the commenters and have modified our proposal to not require
that MIPS eligible clinicians and groups must meet both the dollar
value of Medicare billing charges and the number of Medicare Part B
beneficiaries cared for during a performance period. Instead, we are
finalizing that an individual MIPS eligible clinician or group does not
exceed the low-volume threshold if it has $30,000 or less in billed
Medicare Part B allowed charges or provides care for 100 or fewer Part
B-enrolled Medicare beneficiaries. Also, we believe that the modified proposal reduces the
risk of clinicians withdrawing as Medicare suppliers or minimizing the
number of Medicare beneficiaries that they treat in a year. We will
monitor any effect on Medicare participation. Similar to the goal of
the proposed low-volume threshold, we believe that this modified
approach holds more merit as it retains as MIPS eligible clinicians
those MIPS eligible clinicians who are treating relatively few
beneficiaries, but engage in resource intensive specialties, or those
treating many beneficiaries with relatively low-priced services. We
believe that the modified proposal would also ensure that we can
meaningfully measure the performance and drive quality improvement
across a broad range of MIPS eligible clinician types and specialties.
We note that eligible clinicians who are excluded from the definition
of a MIPS eligible clinician under the low-volume threshold or another
applicable exclusion can still participate voluntarily in MIPS, but are
not subject to positive or negative MIPS adjustments. For future
consideration, we are seeking additional comment on possible ways that
excluded eligible clinicians might be able to opt-in to the MIPS
program (and the MIPS payment adjustment) in future years in a manner
consistent with the statute.
Comment: The majority of commenters recommended that CMS increase
the low-volume threshold. A significant portion of commenters
requested that MIPS eligible clinicians or groups who do not exceed the
low-volume threshold should have Medicare billing charges less than or
equal to $30,000 or provide care for 100 or fewer Part B-enrolled
Medicare beneficiaries. Many commenters noted that raising the low-
volume threshold would allow more physicians with a small number of
Medicare patients to be recognized as MIPS eligible clinicians or
groups who do not exceed the low-volume threshold, particularly MIPS
eligible clinicians providing specialty services or high risk services.
Several commenters indicated that women on Medicare receive expensive
surgical care from OB/GYNs, which could cause MIPS eligible clinicians
and groups to exceed the proposed low-volume threshold despite a very
small number of Medicare patients. The commenters suggested that CMS
exempt MIPS eligible clinicians and groups from the MIPS program who
have less than $30,000 in Medicare allowed charges per year or provide
care for fewer than 100 unique Medicare Part B beneficiaries.
A few commenters indicated that an increase in the low-volume
threshold would mitigate an undue burden on small practices. One
commenter stated that RHCs and such clinicians will have less than
$10,000 in Medicare billing charges, but many of them will have more
than 100 Part B beneficiaries under their care. The commenter expressed
concern that RHCs may be burdened with MIPS requirements for a low
level of Part B claims and thus, may either face penalties or the cost
of implementing the MIPS requirements. A few commenters indicated that
the low-volume threshold should be high enough to exempt physicians who
have no possibility of a positive return on their investment in the
cost of reporting.
Other recommendations from commenters included the following: align
the patient cap with the CPC+ patient panel requirements, which would
increase the number of Medicare Part B beneficiaries cared for to 150
(and would prevent clinicians from having two different low-volume
thresholds within the same program); exclude groups from participation
in MIPS based on an aggregated threshold for the group with the rate of
$30,000 and 100 patients per clinician, in which a group of two
eligible clinicians would be excluded if charging under $60,000 and
caring for under 200 Part B-enrolled Medicare beneficiaries;
exempt MIPS eligible clinicians for the transition year of MIPS who
bill under Place of Service 20, which is the designation for a place
with the purpose of diagnosing and treating illness or injury for
unscheduled, ambulatory patients seeking immediate medical attention;
and exempt facilities operating in Frontier areas from MIPS
participation, at least until 2019 when the list of MIPS eligible
clinicians expands and additional MIPS eligible clinicians are able to
participate in MIPS.
There were other commenters who requested that the threshold
criteria regarding the dollar value of Medicare billed charges and the
number of Medicare Part B beneficiaries cared for be increased to the
following: $25,000 Medicare billed charges or 50 or 100 Part B
beneficiaries; $50,000 Medicare billed charges or 100 or 150 Part B
beneficiaries; $75,000 Medicare billed charges or 100 or 750 Part B
beneficiaries; $100,000 Medicare billed charges or 1000 Part B
beneficiaries; $250,000 Medicare billed charges or 150 Part B
beneficiaries; and $500,000 Medicare billed charges or 400 or 500 Part
B beneficiaries.
Several commenters requested that CMS temporarily increase the low-
volume threshold in order for small practices to not be immediately
impacted by the implementation of MIPS. One commenter suggested that
the threshold be increased to 250 unique Medicare patients and a total
Medicare billing not to exceed $200,000 for 5 years. Another commenter
recommended that CMS set the low-volume threshold in 2019 at $250,000
of
Medicare billing charges. The commenter explained that at such amount,
the avoided penalties at 4 percent would approximately equal the
$10,000 cost of reporting and below such amount, there would not likely
be a return that exceeds the costs of reporting. Below such amount, the
commenter suggested CMS make MIPS participation optional, but MIPS
eligible clinicians that participate would be exempt from any
penalties.
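The commenter's break-even arithmetic can be illustrated with a short calculation. This is a sketch of the commenter's reasoning only; the $10,000 reporting cost is the commenter's estimate, and the variable names are ours, not CMS's.

```python
# Sketch of the commenter's break-even arithmetic (illustrative only).
# The 4 percent figure is the maximum negative MIPS adjustment for 2019;
# the $10,000 reporting cost is the commenter's own estimate.
penalty_rate = 0.04      # maximum negative MIPS payment adjustment
reporting_cost = 10_000  # commenter's estimated cost of reporting

# Billing level at which the avoided penalty equals the cost of reporting.
break_even_charges = reporting_cost / penalty_rate

print(f"break-even Medicare billing charges: ${break_even_charges:,.0f}")
```

Below roughly this level of billing, the commenter argued, avoided penalties cannot exceed the cost of reporting.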
Response: We appreciate the concerns and recommendations provided
by the commenters. We received a range of suggestions and considered
the various options. We agree with commenters that the dollar value of
the low-volume threshold should be increased and that the low-volume
threshold should not require MIPS eligible clinicians and groups to
meet both the dollar value of billed Medicare Part B allowed charges
and the Part B Medicare-enrolled beneficiary count thresholds at this
time. We believe it is important to establish a low-volume threshold
that is responsive to stakeholder feedback. Some of
the recommended options would have established a threshold that would
exclude many eligible clinicians who would otherwise want to
participate in MIPS. The majority of commenters suggested that the low-
volume threshold be changed to reflect $30,000 or less billed Medicare
Part B allowed charges. As a result, we are modifying our proposal. We
are defining MIPS eligible clinicians or groups who do not exceed the
low-volume threshold as an individual MIPS eligible clinician or group
who, during the low-volume threshold determination period, has billed
Medicare Part B allowed charges less than or equal to $30,000 or
provides care for 100 or fewer Part B-enrolled Medicare beneficiaries.
This policy would be more robust and effective at excluding clinicians
for whom submitting data to MIPS may represent a disproportionate
burden, with a secondary effect of allowing greater concentration of
technical assistance on a smaller cohort of practices. We believe that
the higher low-volume threshold addresses the concerns from commenters
while remaining consistent with the proposal and having a policy that
is easy to understand.
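The finalized either/or test can be expressed as a simple check. The following is an illustrative sketch with hypothetical names; it is not CMS's determination system, which operates on claims data as described later in this section.

```python
# Illustrative sketch of the finalized low-volume threshold exclusion
# (hypothetical names; not CMS's actual determination system).
LOW_VOLUME_DOLLAR_THRESHOLD = 30_000    # billed Medicare Part B allowed charges
LOW_VOLUME_BENEFICIARY_THRESHOLD = 100  # Part B-enrolled Medicare beneficiaries

def is_low_volume_excluded(allowed_charges: float, beneficiary_count: int) -> bool:
    """Excluded if EITHER criterion is met (the finalized policy),
    rather than both (the proposed policy)."""
    return (allowed_charges <= LOW_VOLUME_DOLLAR_THRESHOLD
            or beneficiary_count <= LOW_VOLUME_BENEFICIARY_THRESHOLD)

# A specialist billing $45,000 for only 60 beneficiaries is excluded under
# the finalized either/or test, but would not have been under the proposed
# test requiring both criteria.
print(is_low_volume_excluded(45_000, 60))   # True
print(is_low_volume_excluded(45_000, 250))  # False
```

The either/or form is what accommodates both the resource-intensive specialist with few beneficiaries and the clinician treating many beneficiaries with low-priced services.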
Comment: A few commenters indicated that it would be difficult for
psychologists to determine ahead of time if they met the low-volume
threshold relating to the dollar value of $10,000 Medicare billing
charges in order to be exempt from MIPS, yet it would be relatively
easy for psychologists to determine whether they are likely to have
fewer than 100 Medicare patients in a given year based on their
historical volume of Medicare patients. Several commenters requested
that CMS change the low-volume threshold requirement to state ``$10,000
in Medicare charges or fewer than 100 beneficiaries,'' making it
possible for psychologists to be exempt from MIPS, which is essential
in keeping them enrolled in Medicare provider panels. A few commenters
expressed concerns that if the proposed low-volume threshold was
finalized as is, psychologists and psychotherapists who see Medicare
beneficiaries weekly or bi-weekly would be unable to meet Medicare
patients' demand for psychotherapy, would discontinue seeing Medicare
beneficiaries altogether, and would be reluctant to participate in MIPS
if they were not exempted from MIPS participation. Commenters stated
that CMS violates the Mental Health Parity and Addiction Equity Act of
2008 by having separate rules for medical versus psychological
illnesses.
Response: As previously noted, we are finalizing a modification to
our proposal, in which we are defining MIPS eligible clinicians or groups
who do not exceed the low-volume threshold as an individual MIPS
eligible clinician or group who, during the performance period, has
billed Medicare Part B allowed charges less than or equal to $30,000 or
provides care for 100 or fewer Part B-enrolled Medicare beneficiaries.
Thus, a MIPS eligible clinician or a group would only need to meet the
dollar value or the beneficiary count for the low-volume threshold
exclusion. As a result, psychologists will be able to easily discern
whether or not they exceed the low-volume threshold. In addition, we
intend to provide an NPI-level lookup feature prior to or shortly after
the start of the performance period that will allow clinicians to
determine if they do not exceed the low-volume threshold and are
therefore excluded from MIPS. More information on this NPI-level lookup
feature will be made available at QualityPaymentProgram.cms.gov.
In regard to the comment pertaining to the Mental Health Parity and
Addiction Equity Act of 2008 (MHPAEA), we note that the MHPAEA
generally prevents group health plans and health insurance issuers that
provide mental health or substance use disorder benefits from imposing
less favorable benefit limitations on those benefits than on medical/
surgical benefits. The mental health parity requirements of MHPAEA do
not apply to Medicare.
Comment: One commenter indicated that the low-volume threshold is
too low for a group and requested that CMS either establish a certain
exclusion threshold based on group size, or exclude a group if more
than 50 percent of its MIPS eligible clinicians meet the low-volume
threshold. Another commenter recommended that CMS establish a low-volume
threshold based upon practice size, so that solo practices and those
with fewer than 10 clinicians are ineligible for MIPS. The commenter
noted that the financial and reporting burden of participating in MIPS
would be too great for such clinicians.
Response: We appreciate the concern and suggestions from the
commenters and note that we are modifying our proposed low-volume
threshold by increasing the dollar value of the billed Medicare Part B
allowed charges and eliminating the requirement that the clinician meet
both the dollar value and beneficiary count thresholds. MIPS eligible
clinicians or groups that do not exceed the low-volume threshold of
$30,000 in billed Medicare Part B allowed charges, or that provide care
for 100 or fewer Part B-enrolled Medicare beneficiaries, would be excluded from
MIPS. We apply the same low-volume threshold to both individual MIPS
eligible clinicians and groups because groups have the option to elect
to report at an individual or group level. A group that would be
excluded from MIPS when reporting at a group level may find it
advantageous to report at the individual level.
Comment: One commenter suggested that CMS exclude Part B and Part D
drug costs from the low-volume threshold determination to mitigate the
impacts of MIPS on community practices in rural and underserved areas.
Response: We appreciate the suggestion from the commenter and note
that the low-volume threshold applies to Medicare Part B allowed
charges billed by the eligible clinician, such as those under the PFS.
Comment: One commenter stated that CMS should provide education and
training to MIPS eligible clinicians and groups meeting the low-volume
threshold.
Response: We are committed to actively engaging with all
stakeholders, including tribes and tribal officials, throughout the
process of establishing and implementing MIPS and using various means
to communicate and inform MIPS eligible clinicians and groups of the
MIPS requirements. In addition, we intend to provide an NPI-level lookup
feature prior to or shortly after the start of the performance period
that will allow clinicians to determine
if they do not exceed the low-volume threshold and are therefore
excluded from MIPS. More information on this NPI-level lookup feature
will be made available at QualityPaymentProgram.cms.gov.
Comment: One commenter requested that a definition of ``Medicare
billing charges'' be established under the low-volume threshold policy.
The commenter also requested that this term be modified to read
``allowed amount'' so that it is clear that the $10,000 threshold is
calculated based on $10,000 of Medicare-allowed services.
Response: We appreciate the suggestions from the commenter and note
that the low-volume threshold pertains to Medicare Part B allowed
charges billed by a MIPS eligible clinician, such as those under the
PFS. In order to be consistent with the statute, we assess the allowed
charges billed to determine whether or not an eligible clinician
exceeds the low-volume threshold. Also, we specify that the allowed
charges billed relate to Medicare Part B.
Comment: One commenter noted that since MIPS eligibility is based
on the current reporting period, a clinician would not definitively
know if he or she is excluded until the end of the year. It would be
helpful if eligibility would be based on a prior period, as is
currently done for hospital-based determinations for EPs under the EHR
Incentive Program. This is especially problematic for low-volume
clinicians such as OB/GYN, because eligibility might change from year
to year. Another commenter questioned why the low-volume threshold for
a MIPS eligible clinician is calculated based on the performance year
rather than basing the calculation on the previous year.
Response: We agree that it would be beneficial for individual
eligible clinicians and groups to know whether they are excluded under
the low-volume threshold prior to the start of the performance period
and thus, we are finalizing a modification to our proposal to allow us
to make eligibility determinations regarding low-volume status using
historical claims data. This modification will allow us to inform
individual MIPS eligible clinicians and groups of their low-volume
status prior to or shortly after the start of the performance period.
For purposes of this section, we are coining the term ``low-volume
threshold determination period'' to refer to the timeframe used to
assess claims data for making eligibility determinations for the low-
volume threshold exclusion. We define the low-volume threshold
determination period to mean a 24-month assessment period, which
includes a two-segment analysis of claims data during an initial 12-
month period prior to the performance period followed by another 12-
month period during the performance period. The initial 12-month
segment of the low-volume threshold determination period would span
from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and include a 60-day claims run out, which will allow us to inform
eligible clinicians and groups of their low-volume status during the
month (December) prior to the start of the performance period. To
conduct an analysis of the claims data regarding Medicare Part B
allowed charges billed prior to the performance period, we are
establishing an initial segment of the low-volume threshold
determination period consisting of 12 months. We believe that the
initial low-volume threshold determination period enables us to make
eligibility determinations based on 12 months of data that is as close
to the performance period as possible while informing eligible
clinicians of their low-volume threshold status prior to the
performance period. The second 12-month segment of the low-volume
threshold determination period would span from the last 4 months of a
calendar year 1 year prior to the performance period followed by the
first 8 months of the performance period in the next calendar year and
include a 60-day claims run out, which will allow us to inform
additional eligible clinicians and groups of their low-volume status
during the performance period.
Thus, for purposes of the 2019 MIPS payment adjustment, we will
initially identify the low-volume status of individual eligible
clinicians and groups based on 12 months of data starting from
September 1, 2015 to August 31, 2016, with a 60-day claims run out. To
account for the identification of additional individual eligible
clinicians and groups who do not exceed the low-volume threshold during
the 2017 performance period, we will conduct another eligibility
determination analysis based on 12 months of data starting from
September 1, 2016 to August 31, 2017, with a 60-day claims run out.
This second analysis will identify, for example, MIPS eligible
clinicians who exceeded the low-volume threshold during the first
determination assessment but fall below the threshold during the
performance period because their practice changed significantly, they
changed practices from a prior year, or similar circumstances.
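The two-segment timeline described above can be sketched in code. This is illustrative only; the function name and structure are our assumptions, not a description of CMS's systems.

```python
from datetime import date, timedelta

# Sketch of the two-segment low-volume threshold determination period
# (illustrative only; names and structure are hypothetical).
def determination_segments(performance_year: int):
    """Each 12-month segment spans the last 4 months of one calendar year
    and the first 8 months of the next (September 1 to August 31),
    followed by a 60-day claims run out."""
    segments = []
    for start_year in (performance_year - 2, performance_year - 1):
        start = date(start_year, 9, 1)
        end = date(start_year + 1, 8, 31)
        run_out_end = end + timedelta(days=60)  # 60-day claims run out
        segments.append((start, end, run_out_end))
    return segments

# For the 2017 performance period (2019 payment adjustment):
for start, end, run_out in determination_segments(2017):
    print(start, end, run_out)
```

Applied to 2017, this yields the September 1, 2015 to August 31, 2016 and September 1, 2016 to August 31, 2017 windows described above, each followed by its claims run out.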
In addition, we note that the low-volume threshold exclusion is
determined at the individual (TIN/NPI) level for individual reporting
and at the group (TIN) level for group reporting. An eligible clinician
may be identified as having a status that does not exceed the low-
volume threshold at the individual (TIN/NPI) level, but if such
eligible clinician is part of a group that is identified as having a
status exceeding the low-volume threshold, such eligible clinician
would be required to participate in MIPS as part of the group because
the low-volume threshold is determined at the group (TIN) level for
groups. For eligibility determinations pertaining to the low-volume
threshold exclusion, we will be conducting our analysis for each TIN/
NPI and TIN identified in the claims data and make a determination
based on the Medicare Part B allowed charges billed. Since we are
making eligibility determinations for each TIN/NPI and TIN identified
in the claims data, we do not need to know whether or not a group is
reporting at the individual or group level prior to our analyses. Thus,
groups can use the eligibility determinations we make for each TIN/NPI
and TIN to determine whether or not their group would be reporting at
the individual or group level. Subsequently, groups reporting at the
group level would need to meet the group requirements as discussed in
section II.E.3.d. of this final rule with comment period.
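The interaction between individual-level (TIN/NPI) and group-level (TIN) determinations described above can be sketched as follows. The names are hypothetical, and this simplification is not CMS's implementation.

```python
# Sketch of how the controlling low-volume determination depends on the
# reporting level (hypothetical names; a simplification of the TIN/NPI
# logic described above, not CMS's implementation).
def required_to_participate(individual_excluded: bool,
                            group_excluded: bool,
                            reports_as_group: bool) -> bool:
    """An individually excluded clinician still participates in MIPS when
    the group reports at the group (TIN) level and the group's status
    exceeds the low-volume threshold."""
    if reports_as_group:
        return not group_excluded   # group-level (TIN) determination controls
    return not individual_excluded  # individual-level (TIN/NPI) determination controls

# A clinician below the threshold individually, in a group above it:
print(required_to_participate(True, False, reports_as_group=True))   # True
print(required_to_participate(True, False, reports_as_group=False))  # False
```

This also reflects why a group that would be excluded when reporting at the group level might instead find individual-level reporting advantageous, and vice versa.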
Comment: One commenter requested that CMS ensure that low-volume
threshold exclusion and other exclusions would not penalize practices
with more pediatric, women's health, Medicaid, or private insurance
patients.
Response: We recognize that groups will have different patient
populations. As previously noted, we are finalizing a modified low-
volume threshold policy that will increase the number of individual
eligible clinicians and groups excluded from the requirement to
participate in MIPS, which would include individual eligible clinicians
and groups with more pediatric, women's health, Medicaid, or private
insurance patients if they have not billed more than $30,000 of
Medicare Part B allowed charges or provided care for more than 100 Part
B-enrolled Medicare beneficiaries. We note that MIPS eligible
clinicians who are excluded from MIPS have the option to voluntarily
participate in MIPS, but would not receive a MIPS payment adjustment.
Comment: One commenter requested more information about whether the
low-volume threshold will be
eliminated in future years and if there is a potential for an incentive
payment when an eligible clinician meets the low-volume threshold but
elects to report anyway.
Response: We intend to monitor the low-volume threshold requirement
and anticipate that the specific threshold will evolve over time. For
eligible clinicians who do not exceed the low-volume threshold and are
thus excluded from MIPS, they could voluntarily participate in MIPS,
but would not be subject to the MIPS payment adjustment (positive or
negative).
Comment: A few commenters requested clarification on the definition
of the low-volume threshold including whether the $10,000 limit
pertains to all Medicare billing charges or solely Medicare Part B
charges, how this low-volume threshold applies to low-volume clinicians
practicing in and reporting as a group, how beneficiaries are
attributed to clinicians, and if there is a timeframe in which a
patient was last seen.
Response: We note that the dollar value of the low-volume threshold
applies to Medicare Part B allowed charges billed by the eligible
clinician. We note that eligibility determinations regarding low-volume
threshold exclusion are based on claims data. As a result, we are able
to identify Medicare Part B allowed charges billed by the eligible
clinician and the number of Part B-enrolled Medicare beneficiaries
cared for by an eligible clinician during the first and second low-
volume threshold determination periods. For eligibility determinations
regarding the low-volume threshold exclusion, we do not consider the
timeframes of when a patient was last seen. In regard to how the low-
volume threshold applies to MIPS eligible clinicians in groups, we
apply the same low-volume threshold to both individual MIPS eligible
clinicians and groups since groups have the option to report at an
individual or group level. As a result of the low-volume threshold
exclusion being determined at the individual (TIN/NPI) level for
individual reporting and at the group (TIN) level for group reporting,
there will be some eligible clinicians with a low-volume status that
does not exceed the low-volume threshold who would be excluded from
MIPS at the individual (TIN/NPI) level, but if such eligible clinicians
are part of a group with a low-volume status that exceeds the low-
volume threshold, such eligible clinicians would be required to
participate in MIPS as part of the group. Section II.E.3.d. of this
final rule with comment period describes how a group's (TIN)
performance is assessed and scored at the group level and how the MIPS
payment adjustment is applied at the group level when a group includes
clinicians who are excluded from MIPS at the individual level.
Comment: Several commenters opposed holding individuals and groups
to the same low-volume threshold standards. One commenter stated that
basing the exclusion on two thresholds simultaneously would be
antithetical to measurements of quality based on outcomes. The
commenter noted that patient care can be very expensive and some
eligible clinicians could be denied the low-volume threshold exclusion
after seeing only a few very complex patients over the course of the
performance period. Another commenter indicated that the proposed
exclusionary criteria may lead to eligible clinicians in solo or small
practices withdrawing as Medicare suppliers, or limiting the number of
Medicare patients they treat over a performance period.
One commenter requested that CMS issue a clarification stating that
when clinicians choose to have their performance assessed at the group
level, the low-volume threshold would also be assessed at the group
level. This would ensure consistent treatment. Another commenter
requested clarity regarding the low-volume threshold exclusion
definition for groups, and recommended that CMS apply a multiplying
factor for each enrolled Medicare clinician in the group definition.
One commenter recommended that CMS scale the minimum number of Part B-
enrolled Medicare beneficiaries and Medicare billed charges to the
number of physician group members while another commenter requested
that if a practice reports as a group, the low-volume threshold should
be multiplied by the number of clinicians in the group. Commenters
recommended a higher threshold for groups.
A few commenters indicated that the current proposal does not
provide a meaningful exclusion for small and rural practices that
cannot afford the upfront investments (including investments in EHR
systems) and as a result of the high costs to report for small
practices, the threat of negative MIPS payment adjustments or low
positive MIPS payment adjustments that do not cover the costs to report
would deter small practices from participating in MIPS.
Response: We thank the commenters for their concerns and
recommendations regarding the low-volume threshold. We recognize that
the low-volume threshold proposed in section II.E.3.c. of the proposed
rule (81 FR 28178) is a concern and as previously noted, we are
modifying our proposal by increasing the dollar value of the billed
Medicare Part B allowed charges and eliminating the requirement for
MIPS eligible clinicians and groups to meet both the dollar value
threshold and the 100 beneficiary count. In this final rule with
comment period, we continue to apply the same low-volume threshold for
both individual MIPS eligible clinicians and groups. We disagree with
the comment regarding a percentage-based approach for groups because
groups have the option of electing to report at an individual or group
level. If a group elects not to report as a group, then each MIPS
eligible clinician would report individually.
In addition, we believe that the modified proposal reduces the risk
of clinicians withdrawing as Medicare suppliers or minimizing the
number of Medicare beneficiaries that they treat in a year. We will
monitor any effect on Medicare participation in CY 2017 and future
calendar years.
Comment: Several commenters expressed concern that clinicians
working in solo practices or small groups, especially in rural areas
and HPSAs, would have difficulty meeting the requirements for MIPS. One
commenter noted that non-board-certified doctors often work in these
areas and are reimbursed at a lower rate than board-certified doctors.
The commenters recommended that CMS make similar concessions for this
category of clinicians as it proposed to do for non-patient facing MIPS
eligible clinicians in the proposed rule. One commenter requested that
small practice physicians and solo physicians in HPSAs be exempt from
MIPS. The commenters requested that CMS ensure that small and solo
practices have an equal opportunity to participate successfully in MIPS
and Advanced APMs.
Response: We appreciate the concerns expressed by commenters and
recognize that certain individual MIPS eligible clinicians and groups
may only be able to report on a few, or possibly no, applicable
measures and activities for the MIPS requirements. In section
II.E.6.b.(2) of this final rule with comment period, we describe the
re-weighting of each performance category when there are not sufficient
measures and activities that are applicable and available. Also, our
modified low-volume threshold exclusion policy increases the dollar
value of Medicare Part B allowed charges billed by an eligible
clinician, which will increase the number of eligible clinicians and
groups excluded from MIPS and not subject to a negative MIPS payment
adjustment, which may include additional solo or small rural or HPSA
practices. We believe that rural areas, small practices, and HPSAs will
benefit from other policies that we are finalizing throughout this
final rule with comment period, such as lower reporting requirements
and a lower performance threshold.
Comment: One commenter expressed concern that the MIPS program as
outlined in the proposed rule would limit referrals to necessarily
higher-cost small and rural providers. The commenter indicated that
comparisons between small, rural practices and larger practices do
not take into account differences in infrastructure and technological
capabilities and patient populations, which the commenter believed are
more likely to be sick and poor in the rural settings. Another
commenter expressed concern that rural clinicians who serve
impoverished communities and do not have additional resources (for
example, dieticians who can provide more hands-on care for diabetic
patients) would be unfairly penalized if their patients do not comply
with medical advice.
Response: We appreciate the concern expressed by the commenter and
recognize that groups vary in size, clinician composition, patient
population, resources, technological capabilities, geographic location,
and other characteristics. While we believe the MIPS measures are valid
and reliable, we will continue to investigate methods to ensure all
clinicians are treated as fairly as possible within MIPS. As noted in
this final rule with comment period, the Secretary is required to take
into account the relevant studies conducted and recommendations made in
reports under section 2(d) of the Improving Medicare Post-Acute
Transformation (IMPACT) Act of 2014. Under the IMPACT Act, the Office
of the Assistant Secretary for Planning and Evaluation (ASPE) has been
conducting studies on the issue of risk adjustment for sociodemographic
factors on quality measures and cost, as well as other strategies for
including social determinants of health status evaluation in CMS
programs. We will closely examine the ASPE studies when they are
available and incorporate findings as feasible and appropriate through
future rulemaking. Also, we will monitor outcomes of beneficiaries with
social risk factors, as well as the performance of the MIPS eligible
clinicians who care for them to assess for potential unintended
consequences such as penalties for factors outside the control of
clinicians. We believe that rural clinicians and practices will benefit
from other policies that we are finalizing throughout this final rule
with comment period, such as lower reporting requirements and a lower
performance threshold.
Comment: One commenter requested clarification as to whether or not
non-patient facing MIPS eligible clinicians who are not based in a
rural practice or not a member of a FQHC, but see fewer than 25
patients, would be exempt from MIPS. Another commenter requested
clarification regarding whether or not the low-volume threshold applies
if a physical therapist, occupational therapist, or speech-language
pathologist is institution-based or nursing home-based.
Response: In both situations that the commenters raise, the
clinician would be excluded from MIPS; however, they would be excluded
for different reasons. For the first example, the non-patient facing
MIPS eligible clinician would be excluded due to seeing fewer than 25
patients, which falls below our finalized low-volume threshold for
exclusion. For the second example, a physical therapist,
occupational therapist, or speech-language pathologist cannot be
considered a MIPS eligible clinician until the third year of the MIPS
program at the earliest.
Comment: One commenter proposed a phase-in period for small
practices in addition to an increased low-volume threshold because the
proposed rule did not immediately allow the opportunity for virtual
groups that could provide the infrastructure to assist small practices.
Additionally, the commenter believed that most small practices and solo
physicians would not be ready to report on January 1, 2017. The
commenter's recommended phase-in period would exempt the 40th
percentile of all small and rural practices in each specialty in year
1; the 30th percentile of all small and rural practices in each
specialty in year 2; the 20th percentile of all small and rural
practices in each specialty in year 3; and the 10th percentile of all
small and rural practices in each specialty in year 4. The commenter's
recommended phase-in would be voluntary, and they believe it would
provide more time for resource-limited small practices to prepare,
finance new systems and upgrades, change workflows, and transition to
MIPS.
Response: We appreciate the concerns and recommendations provided
by the commenter. We recognize that small and rural practices may not
have experience using CEHRT and/or may not be prepared to meet the MIPS
requirements for each performance category. As described in this
section of the final rule with comment period, we are modifying our
proposal by increasing the dollar value of billed Medicare Part B
allowed charges and eliminating the requirement for MIPS eligible
clinicians and groups to meet both the dollar value threshold and the
100 beneficiary count, in which groups not exceeding the low-volume
threshold would be excluded from the MIPS requirements. We believe our
modified low-volume threshold is less complex, with potentially a
single parameter determining low-volume status, and addresses the
commenter's concerns by providing exclusions for more individual MIPS
eligible clinicians and groups, including small and rural practices.
Also, in section II.E.5.g.(8)(a) of this final rule with comment
period, we describe our final policies regarding the re-weighting of
the advancing care information performance category within the final
score, in which we would assign a weight of zero when there are not
sufficient measures applicable and available.
Comment: A few commenters expressed concern that the proposed rule
favored large practices, and requested that group practices with fewer
than 10 or 15 physicians be excluded from MIPS. One commenter
suggested that it may be more beneficial to expand the exclusion to
practices with fewer than 15 physicians, thus reducing the number of
practitioners who would opt out of Medicare altogether following MACRA
and preserving a fairer adjustment distribution among moderate and
large practices.
Response: We thank the commenters for expressing their concerns and
note that we are modifying our proposed low-volume threshold to apply
to an individual MIPS eligible clinician or group who, during the low-
volume threshold determination period, has billed Medicare Part B
allowed charges less than or equal to $30,000 or provides care for 100
or fewer Part B-enrolled Medicare beneficiaries. We believe our modified
proposal would increase the number of groups excluded from
participating in MIPS based on the low-volume threshold, including
group practices with fewer than 10 or 15 clinicians.
Comment: One commenter requested that CMS provide the underlying
data that shows the distribution of spending and volume of cases on
which the low-volume threshold is based. The commenter expressed
concern that if the low-volume threshold is set too low, it may place
too many clinicians close to the minimum of 20 attributable cases for
resource use, which lacks statistical
[[Page 77068]]
robustness. Another commenter suggested that CMS increase the low-
volume threshold, as the commenter believed that counties with skewed
demographics will give clinicians no chance to avoid negative MIPS
payment adjustments. The commenter requested a moratorium on the
implementation of MIPS until a study can be done that examines the
potential effects of the law in such counties or for CMS to exempt
practices that have a patient-population with more than 30 percent of
its furnished services provided to Medicare Part B beneficiaries until
the effects of the law are studied on the impact to these groups.
Response: We appreciate the concerns expressed by commenters
regarding the proposed low-volume threshold and intend to monitor the
effects of the low-volume threshold and anticipate that the specific
thresholds will evolve over time. In this section of the final rule
with comment period, we are modifying our proposed low-volume
threshold, in which we are defining MIPS eligible clinicians or groups
that do not exceed the low-volume threshold as an individual MIPS
eligible clinician or group who, during the low-volume threshold
determination period, has billed Medicare Part B allowed charges less
than or equal to $30,000 or provides care for 100 or fewer Part
B-enrolled Medicare beneficiaries. In regard to the commenter's concern
about having too many MIPS eligible clinicians near the minimum number
of attributable cases for the cost performance category, we believe the
increased low-volume threshold policy would reduce such risk and ensure
statistical robustness. We also note that
we have made a number of modifications within the cost performance
category and refer readers to section II.E.5.e. of this final rule with
comment period for the discussion of our modified policies.
Comment: One commenter requested that CMS calculate the projected
data collection and reporting costs, the number of cases necessary to
achieve statistical significance or reliability and comparison
purposes, and the administrative costs on the agency to manage and
calculate MIPS scores. With such costs in mind, the commenter requested
that CMS adjust the low-volume threshold to a level such that MIPS
would only apply to eligible clinicians for whom the costs of
participating in the MIPS program outweighed the costs of refusing to
accept Medicare patients. Otherwise, the commenter was concerned that solo
practitioners and small practices would opt out of treating Medicare
patients.
Response: We thank the commenter for their suggestions and note
that we are modifying our proposed low-volume threshold by increasing
the dollar value of billed Medicare Part B allowed charges and
eliminating the requirement for MIPS eligible clinicians and groups to
meet both the dollar value threshold and the 100 beneficiary count. We
believe our modified proposal would increase the number of groups
excluded from participating in MIPS based on the low-volume threshold
and prevent the low-volume threshold from being a potential factor that
could influence a MIPS eligible clinician's decision to deny access to
care for Medicare Part B beneficiaries or opt out of treating Medicare
Part B beneficiaries. We refer readers to section III.B. of this final
rule with comment period for our discussion regarding burden reduction.
Comment: For those eligible clinicians not participating in an ACO,
one commenter requested clarification on the proposed $10,000
threshold, specifically, whether this includes payments made under the
RHC all-inclusive rate (AIR) or FQHC prospective payment system. The
commenter suggested that the $10,000 threshold should only include Part
B PFS allowed charges because the other payment methodologies already
are alternatives to fee schedules.
Response: In this section of the final rule with comment period, we
are modifying our proposed low-volume threshold to be based on a dollar
value of $30,000 or less in billed Medicare Part B allowed charges
during a performance period or a count of 100 or fewer Part B-enrolled
beneficiaries, which
would apply to clinicians in RHCs and FQHCs with billed Medicare Part B
allowed charges.
Comment: A few commenters requested clarification on the low-volume
threshold for clinicians who change positions frequently or work as
locum tenens. The commenters asked CMS to clarify whether the
threshold would be cumulative for these clinicians throughout the year
as they bill under different TINs, or whether the threshold would be
specific to a TIN/NPI combination. Commenters recommended that the low-
volume threshold be for a specific TIN in which a clinician may work.
Response: In sections II.E.2.a. and II.E.2.b. of this final rule
with comment period, we describe the identifiers for MIPS eligible
clinicians participating in MIPS at the individual or group level. For
MIPS eligible clinicians reporting as individuals, we use a combination
of billing TIN/NPI as the identifier to assess performance. To
determine the low-volume status of eligible clinicians reporting
individually, we will calculate the low-volume threshold separately for
each TIN/NPI combination, including each combination under which an
individual MIPS eligible clinician bills when billing under multiple
TINs. If an individual eligible clinician exceeds the low-volume
threshold under any TIN/NPI combination, the eligible clinician would
be considered a MIPS eligible clinician and required to meet the MIPS
requirements for those TIN/NPI combinations.
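The per-combination determination described above can be sketched in
code. The following is an illustrative sketch only, not CMS systems
code; the data structures and function names are hypothetical, while
the $30,000 and 100-beneficiary values are the thresholds finalized in
this section.

```python
# Illustrative sketch: low-volume status is evaluated separately for
# each TIN/NPI combination. Finalized thresholds from this section.
LOW_VOLUME_CHARGES = 30_000     # billed Medicare Part B allowed charges
LOW_VOLUME_BENEFICIARIES = 100  # Part B-enrolled beneficiary count

def exceeds_low_volume_threshold(allowed_charges, beneficiary_count):
    """A TIN/NPI combination exceeds the threshold only if it is above
    BOTH the dollar value and the beneficiary count; at or below either
    one means the combination is excluded from MIPS."""
    return (allowed_charges > LOW_VOLUME_CHARGES
            and beneficiary_count > LOW_VOLUME_BENEFICIARIES)

def mips_required_combinations(claims_by_tin_npi):
    """claims_by_tin_npi maps (tin, npi) -> (allowed_charges,
    beneficiary_count). Returns the TIN/NPI combinations required to
    meet the MIPS requirements."""
    return {key for key, (charges, benes) in claims_by_tin_npi.items()
            if exceeds_low_volume_threshold(charges, benes)}
```

Under this sketch, a clinician billing under two TINs who exceeds both
thresholds under one TIN/NPI combination but not the other would be
required to meet the MIPS requirements only for the exceeding
combination.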
Comment: One commenter suggested that CMS develop a MIPS hardship
exception in addition to a low-volume threshold.
Response: We thank the commenter for the suggestion. We note that
the section II.E.5.g.(8)(a)(ii) of this final rule with comment period
describes our final policies regarding the re-weighting of the
advancing care information performance category within the final score,
in which we would assign a weight of zero when there are not sufficient
measures applicable and available for MIPS eligible clinicians facing a
significant hardship.
Comment: One commenter stated that the low-volume threshold should
also take into account total Medicare patients and billing, including
Medicare Advantage enrollees, not just Part B.
Response: We appreciate the suggestion from the commenter, but note
that section 1848(q)(1)(C)(iv) of the Act establishes provisions
relating to the low-volume threshold, in which the low-volume threshold
only pertains to the number of Part B-enrolled Medicare beneficiaries,
the number of items and services furnished to such individuals, or the
amount of allowed charges billed under Part B. To the extent that
Medicare Part B allowed charges are incurred for beneficiaries enrolled
in section 1833(a)(1)(A) or 1876 Cost Plans, those Medicare
beneficiaries would be included in the beneficiary count; however,
beneficiaries enrolled in Medicare Advantage plans that receive their
Part B services through their Medicare Advantage plan will not be
included in allowed charges billed under Medicare Part B for
determining the low-volume threshold.
Comment: Regarding partial year performance data, one commenter
indicated that the low-volume reporting threshold and ``insufficient
sample size'' standard already proposed for MIPS are adequate, and no
additional ``partial year'' criteria would be needed. For example, a
clinician who only began billing Medicare in November and did not
exceed the low-volume threshold would not be eligible for MIPS. Another
clinician who began billing Medicare in
[[Page 77069]]
November who exceeds the low-volume threshold, even in such a short
time period, would be eligible for MIPS. The commenter supported this
approach because it is simple and straightforward and does not require
any additional calculations.
Response: We appreciate the support from the commenter.
Comment: One commenter requested that CMS provide an exemption for
physicians over 60 or 65 years old as they cannot afford to implement
the necessary changes, particularly if they are working part-time.
Response: We appreciate the concerns expressed by the commenter and
note that all MIPS eligible clinicians (as defined in section 1861(r)
of the Act) practicing either full-time or part-time are required to
participate in MIPS unless determined eligible for an exclusion. A MIPS
eligible clinician, whether practicing full-time or part-time, who does
not exceed the low-volume threshold would be excluded from
participating in MIPS.
After consideration of the public comments we received, we are
finalizing a modification to our proposal to define MIPS eligible
clinicians or groups who do not exceed the low-volume threshold. At
Sec. 414.1305, we are defining MIPS eligible clinicians or groups who
do not exceed the low-volume threshold as an individual MIPS eligible
clinician or group who, during the low-volume threshold determination
period, has billed Medicare Part B allowed charges less than or equal
to $30,000 or provides care for 100 or fewer Part B-enrolled Medicare
beneficiaries. We are finalizing our proposed policy at Sec.
414.1310(b) that for a year, MIPS eligible clinicians who do not exceed
the low-volume threshold (as defined at Sec. 414.1305) are excluded
from MIPS for the performance period with respect to a year. The low-
volume threshold also applies to MIPS eligible clinicians who practice
in APMs under the APM scoring standard at the APM Entity level, in
which APM Entities that do not exceed the low-volume threshold would be
excluded from the MIPS requirements and not subject to a MIPS payment
adjustment. Such an exclusion will not affect an APM Entity's QP
determination if the APM Entity is an Advanced APM. Additionally,
because we agree that it would be beneficial for individual eligible
clinicians and groups to know whether they are excluded under the low-
volume threshold prior to the start of the performance period, we are
finalizing a modification to our proposal to allow us to make
eligibility determinations regarding low-volume status using historical
data. This modification will allow us to inform individual MIPS
eligible clinicians and groups of their low-volume status prior to the
performance period. We establish the low-volume threshold determination
period to refer to the timeframe used to assess claims data for making
eligibility determinations for the low-volume threshold exclusion. We
define the low-volume threshold determination period to mean a 24-month
assessment period, which includes a two-segment analysis of claims data
during an initial 12-month period prior to the performance period
followed by another 12-month period during the performance period. In
order to conduct an analysis of the data prior to the performance
period, we are establishing an initial low-volume threshold
determination period consisting of 12 months. The initial 12-month
segment of the low-volume threshold determination period would span
from the last 4 months of a calendar year 2 years prior to the
performance period followed by the first 8 months of the next calendar
year and include a 60-day claims run out, which will allow us to inform
eligible clinicians and groups of their low-volume status during the
month (December) prior to the start of the performance period. The
second 12-month segment of the low-volume threshold determination
period would span from the last 4 months of a calendar year 1 year
prior to the performance period followed by the first 8 months of the
performance period in the next calendar year and include a 60-day
claims run out, which will allow us to inform additional eligible
clinicians and groups of their low-volume status during the performance
period.
Thus, for purposes of the 2019 MIPS payment adjustment, we will
initially identify the low-volume status of individual eligible
clinicians and groups based on 12 months of data starting from
September 1, 2015 to August 31, 2016. In order to account for the
identification of additional individual eligible clinicians and groups
that do not exceed the low-volume threshold during the 2017 performance
period, we will conduct another eligibility determination analysis
based on 12 months of data starting from September 1, 2016 to August
31, 2017. This second analysis would capture, for example, eligible
clinicians who exceeded the low-volume threshold during the first
determination assessment but fall below the threshold during the
performance period because their practice changed significantly, they
changed practices from a prior year, or for similar reasons. Similarly,
for future years, we will conduct an initial
eligibility determination analysis based on 12 months of data
(consisting of the last 4 months of the calendar year 2 years prior to
the performance period and the first 8 months of the calendar year
prior to the performance period) to determine the low-volume status of
individual eligible clinicians and groups, and conduct another
eligibility determination analysis based on 12 months of data
(consisting of the last 4 months of the calendar year prior to the
performance period and the first 8 months of the performance period) to
determine the low-volume status of additional individual MIPS eligible
clinicians and groups. We will not change the low-volume status of any
individual eligible clinician or group identified as not exceeding the
low-volume threshold during the first eligibility determination
analysis based on the second eligibility determination analysis. Thus,
an individual eligible clinician or group that is identified as not
exceeding the low-volume threshold during the first eligibility
determination analysis will continue to be excluded from MIPS for the
duration of the performance period regardless of the results of the
second eligibility determination analysis. We will conduct the second
eligibility determination analysis to account for the identification of
additional, previously unidentified individual eligible clinicians and
groups who do not exceed the low-volume threshold.
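The two-segment timeline described above can be expressed as a short
date calculation. This is an illustrative sketch only, with
hypothetical function names; the September 1 through August 31 segments
reflect the determination periods finalized in this section, and the
60-day claims run-out that follows each segment is noted but not
modeled.

```python
# Illustrative sketch: the two 12-month analysis segments of the
# low-volume threshold determination period for a given performance year.
from datetime import date

def determination_segments(performance_year):
    """Returns ((start, end), (start, end)) for the initial and second
    12-month segments. Each segment runs from September 1 through
    August 31; a 60-day claims run-out follows each segment."""
    initial = (date(performance_year - 2, 9, 1),
               date(performance_year - 1, 8, 31))
    second = (date(performance_year - 1, 9, 1),
              date(performance_year, 8, 31))
    return initial, second
```

For the 2017 performance period (the 2019 MIPS payment adjustment),
this yields the September 1, 2015 through August 31, 2016 segment and
the September 1, 2016 through August 31, 2017 segment identified above.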
We recognize that the low-volume threshold determination period
effectively combines two 12-month segments from 2 consecutive calendar
years, in which the two 12-month periods of data that would be used for
our analysis will not align with the calendar years. Also, we note that
the low-volume threshold determination period may impact new Medicare-
enrolled eligible clinicians who are excluded from MIPS participation
for the performance period in which they are identified as new
Medicare-enrolled eligible clinicians. Such clinicians would ordinarily
begin participating in MIPS in the subsequent year, but under our
modified low-volume threshold, are more likely to be excluded for a
second year. The low-volume threshold exclusion may apply if, for
example, such eligible clinician became a new Medicare-enrolled
eligible clinician during the last 4 months of the calendar year and
did not exceed the low-volume threshold of billed Medicare Part B
allowed charges. Since the initial eligibility determination period
consists of the last 4 months of the calendar year 2 years prior to the
performance period
[[Page 77070]]
and the first 8 months of the calendar year prior to the performance
period, these new Medicare-enrolled eligible clinicians could be
identified as having a low-volume status if the analysis reflects
billed Medicare Part B allowed charges less than or equal to $30,000
or care provided for 100 or fewer Part B-enrolled Medicare
beneficiaries.
As noted above, we will not change the low-volume status of any
individual MIPS eligible clinician or group identified as not exceeding
the low-volume threshold during the first eligibility determination
analysis based on the second eligibility determination analysis.
d. Group Reporting
(1) Background
As noted in section II.E.1.e. of the proposed rule (81 FR 28176),
section 1848(q)(1)(D) of the Act requires the Secretary to establish
and apply a process that includes features of the PQRS group practice
reporting option (GPRO) established under section 1848(m)(3)(C) of the
Act for MIPS eligible clinicians in a group for the purpose of
assessing performance in the quality category and gives the Secretary
the discretion to do so for the other performance categories. The
process established for purposes of MIPS must, to the extent
practicable, reflect the range of items and services furnished by the
MIPS eligible clinicians in the group. We believe this means that the
process established for purposes of MIPS should, to the extent
practicable, encompass elements that enable MIPS eligible clinicians in
a group to meet reporting requirements that reflect the range of items
and services furnished by the MIPS eligible clinicians in the group. At
Sec. 414.1310(e), we proposed requirements for groups. For purposes of
section 1848(q)(1)(D) of the Act, at Sec. 414.1310(e)(1) we proposed
the following way for individual MIPS eligible clinicians to have their
performance assessed as a group: As part of a single TIN associated
with two or more MIPS eligible clinicians, as identified by a NPI, that
have their Medicare billing rights reassigned to the TIN (as discussed
further in section II.E.2.b. of the proposed rule).
To have its performance assessed as a group, at Sec.
414.1310(e)(2), we proposed a group must meet the proposed definition
of a group at all times during the performance period for the MIPS
payment year. Additionally, at Sec. 414.1310(e)(3), we proposed that,
in order to have their performance assessed as a group, individual MIPS
eligible clinicians within a group must aggregate their performance
data across the TIN. At Sec. 414.1310(e)(3), we proposed that a group
electing to have its performance assessed as a group would be assessed
as a group across all four MIPS performance categories. For example, if
a group submits data for the quality performance category as a group,
CMS would assess them as a group for the remaining three performance
categories. We solicited public comments on the proposal regarding how
groups will be assessed under MIPS.
The following is a summary of the comments we received regarding
our proposed requirements for groups, including: Individual MIPS
eligible clinicians would have their performance assessed as a group as
part of a single TIN associated with two or more MIPS eligible
clinicians, as identified by a NPI, that have their Medicare billing
rights reassigned to the TIN; a group must meet the definition of a
group at all times during the performance period for the MIPS payment
year; individual MIPS eligible clinicians within a group must aggregate
their performance data across the TIN in order for their performance to
be assessed as a group; and a group that elects to have its performance
assessed as a group would be assessed as a group across all four MIPS
performance categories.
Comment: The majority of commenters were supportive of the proposed
group requirements. In particular, several commenters supported our
proposal to allow MIPS eligible clinicians to report across the four
performance categories at an individual or group level. The commenters
also expressed support for the way in which we would assess group
performance.
Response: We appreciate the support from commenters.
Comment: One commenter supported CMS' recognition that MIPS
eligible clinicians may practice in multiple settings and proposal to
allow such MIPS eligible clinicians to be measured as individuals or
through a group's performance.
Response: We appreciate the support from the commenter.
Comment: A few commenters recommended that CMS consider allowing
for greater flexibility in the reporting requirements and allow MIPS
eligible clinicians to participate either individually or as a group
for each of the four performance categories, as it may be reasonable to
report individually for some categories and as a group for other
categories. One commenter indicated that reporting for the advancing
care information measures via a group would be a helpful option, but
there are hurdles clinicians and health IT vendors and developers may
need to overcome during the first 2 years to do so.
Response: We appreciate the feedback from the commenters. While we
want to ensure that there is as much flexibility as possible within the
MIPS program, we believe it is important that MIPS eligible clinicians
choose how they will participate in MIPS as a whole, either as an
individual or as a group. Whether MIPS eligible clinicians participate
in MIPS as an individual or group, it is critical for us to assess the
performance of individual MIPS eligible clinicians or groups across the
four performance categories collectively as either an individual or
group in order for the final score to reflect performance at a true
individual or group level and to ensure the comparability of data.
Section II.E.5.g.(5)(c) of this final rule with comment period
describes group reporting requirements pertaining to the advancing care
information performance category.
Comment: A few commenters indicated that group reporting can be
challenging if the group includes part-time clinicians.
Response: We recognize that group-level reporting offers different
advantages and disadvantages to different practices and therefore, it
may not be the best option for all MIPS eligible clinicians who are
part of a particular group. Depending on the composition of a group,
which may include part-time clinicians, some groups may find meeting
the MIPS requirements to be less burdensome if they report at the
individual level rather than at the group level. Also, we note that
some part-time clinicians may be excluded from MIPS participation at
the individual level if they do not exceed the low-volume threshold
(section II.E.3.c. of this final rule with comment period describes the
low-volume threshold exclusion).
Comment: One commenter requested clarification regarding whether or
not clinicians excluded from MIPS would also be excluded from group-
level reporting.
Response: With clinician practices having the option to report at
the individual (TIN/NPI) or group level (TIN), we elaborate on how a
MIPS group's (TIN) performance is assessed and scored at the group
level and how the MIPS payment adjustment is applied at the group level
when a group includes clinicians who are excluded from MIPS at the
individual level. We note that there are three types of MIPS
exclusions: New Medicare-enrolled eligible clinicians, QPs and Partial
QPs
[[Page 77071]]
who do not report on applicable MIPS measures and activities, and
eligible clinicians who do not exceed the low-volume threshold (see
section II.E.3. of this final rule with comment period), which
determine when an eligible clinician is not considered a MIPS eligible
clinician and thus, not required to participate in MIPS. The two types
of exclusions pertaining to new Medicare-enrolled eligible clinicians,
and QPs and Partial QPs who do not report on applicable MIPS measures
and activities are determined at the individual (NPI) level while the
low-volume threshold exclusion is determined at the individual (TIN/
NPI) level for individual reporting and at the group (TIN) level for
group reporting.
A group electing to submit data at the group level would have its
performance assessed and scored across the TIN, which could include
items and services furnished by individual NPIs within the TIN who are
not required to participate in MIPS. For example, excluded eligible
clinicians (new Medicare-enrolled eligible clinicians; QPs and Partial
QPs who do not report on applicable MIPS measures and activities; and
eligible clinicians who do not exceed the low-volume threshold) are
part of the group and, therefore, would be considered in the group's
score. However, the MIPS payment
adjustment would apply differently at the group level in relation to
each exclusion circumstance. For example, groups reporting at the group
level that include new Medicare-enrolled eligible clinicians, or QPs or
Partial QPs would have the MIPS payment adjustment only apply to the
Medicare Part B allowed charges pertaining to the group's MIPS eligible
clinicians and the MIPS payment adjustment would not apply to such
clinicians excluded from MIPS based on these two types of exclusions.
We reiterate that any individual (NPI) excluded from MIPS because they
are identified as new Medicare-enrolled, QP, or Partial QP would not
receive a MIPS payment adjustment, regardless of their MIPS
participation.
    We note that the low-volume threshold is different from the other
two exclusions in that it is not determined solely based on the
individual NPI status; rather, it is based on both the TIN/NPI status
(to determine an exclusion at the individual level) and the TIN status
(to determine an exclusion at the group level). In regard to
group-level reporting, the
group, as a whole, is assessed to determine if the group (TIN) exceeds
the low-volume threshold. Thus, eligible clinicians (TIN/NPI) who do
not exceed the low-volume threshold at the individual reporting level
and would otherwise be excluded from MIPS participation at the
individual level, would be required to participate in MIPS at the group
level if such eligible clinicians are part of a group reporting at the
group level that exceeds the low-volume threshold.
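The interaction between the individual-level (TIN/NPI) and group-level
(TIN) determinations described above can be sketched as follows. This
is an illustrative sketch only, not CMS code; the field names are
hypothetical, and the sketch assumes the aggregated TIN-level charges
and unique beneficiary counts are supplied rather than derived from
claims.

```python
# Illustrative sketch: the low-volume exclusion is applied at the
# TIN/NPI level for individual reporting but at the TIN level for group
# reporting, so all clinicians in a group share one status.
LOW_VOLUME_CHARGES = 30_000
LOW_VOLUME_BENEFICIARIES = 100

def exceeds_threshold(charges, beneficiaries):
    # Exceeding requires being above BOTH finalized thresholds.
    return (charges > LOW_VOLUME_CHARGES
            and beneficiaries > LOW_VOLUME_BENEFICIARIES)

def required_to_participate(clinician, group_reporting, group_totals=None):
    """clinician: this TIN/NPI's 'charges' and 'beneficiaries'.
    group_totals: aggregated 'charges' and 'beneficiaries' for the TIN
    when the practice reports at the group level."""
    if group_reporting:
        # Group reporting: every clinician in the TIN shares the status
        # of the TIN as a whole.
        return exceeds_threshold(group_totals["charges"],
                                 group_totals["beneficiaries"])
    # Individual reporting: status is specific to this TIN/NPI.
    return exceeds_threshold(clinician["charges"],
                             clinician["beneficiaries"])
```

The group branch reflects the policy that all clinicians in a TIN
reporting at the group level share a single low-volume status, so a
clinician below the threshold individually would still participate when
the group as a whole exceeds it.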
We considered aligning how the MIPS exclusions would be applied at
the group level for each of the three exclusion circumstances. We
recognize that alignment would provide a uniform application across the
three exclusions and offer simplicity, but we also believe it is
critical to ensure that there are opportunities encouraging
coordination, teamwork, and shared responsibility within groups. In
order to encourage coordination, teamwork, and shared responsibility at
the group level, we will assess the low-volume threshold so that all
clinicians within the group have the same status: All clinicians
collectively exceed the low-volume threshold or they do not exceed the
low-volume threshold.
    In addition, we recognize that individual clinicians who do not
meet the definition of a MIPS eligible clinician during the first 2
years of MIPS, such as physical and occupational therapists, clinical
social workers, and others, are not required to participate in MIPS,
but may voluntarily report measures and activities for MIPS. Clinicians
who are not MIPS eligible but voluntarily report for MIPS would not
receive a MIPS payment adjustment. Accordingly, groups reporting at the
group level may voluntarily include such eligible clinicians in the
aggregated data that would be reported for measures and activities
under MIPS. Groups reporting at the group level that voluntarily
include eligible clinicians who do not meet the definition of a MIPS
eligible clinician would have their performance assessed and scored
across the TIN, but those clinicians would not receive a MIPS payment
adjustment, regardless of their voluntary MIPS participation.
    We are finalizing our proposals regarding group requirements;
however, we welcome additional comment on: How we are applying
group-related policies pertaining to group-level performance assessment
and scoring and the MIPS payment adjustment to groups with eligible
clinicians excluded from MIPS based on the three exclusions or not MIPS
eligible for the first 2 years of MIPS; the advantages and
disadvantages of how we are applying group-related policies when groups
include eligible clinicians excluded from the requirement to
participate in MIPS at the individual level; and alternative approaches
that could be considered.
Comment: One commenter expressed concerns that group reporting
benchmarks and comparison groups have not yet been identified.
Response: All MIPS eligible clinicians, regardless of specialty,
geographic location, or whether they report as an individual or group,
who submit data using the same submission mechanism would be included
in the same benchmark. We refer readers to sections II.E.6.a.(2)(a) and
II.E.6.a.(3)(a) of this final rule with comment period for further
discussion of policies regarding quality measure and cost measure
benchmarks under MIPS.
Comment: One commenter requested clarification regarding group
reporting for organizations with multiple practices/specialties.
Response: As proposed, group reporting would occur and be
aggregated at the TIN level. No distinct reporting occurs at the
specialty or practice site level.
Comment: One commenter requested clarification on what can be
expected under MIPS by small practices for which measures are not
applicable.
    Response: In section II.E.6.b.(2)(b) of this final rule with
comment period, we describe the scoring methodology that is applied
when there are few or no applicable measures under the quality
performance category for MIPS eligible clinicians or groups to report.
Comment: One commenter recommended that CMS focus regulations on
large systems and practices and have fewer regulations for small
practices.
    Response: We believe it is essential that our requirements
pertaining to group-level reporting be applicable to all groups
regardless of size, geographic location, composition, or other
differentiating factors. However, we believe that there are
circumstances in which our policies should consider how different types
of groups would be affected. In this final rule with comment period, we
establish an exclusion for individual MIPS eligible clinicians and
groups who do not exceed a low-volume threshold pertaining to a dollar
value of Medicare Part B allowed charges or a Part B-enrolled
beneficiary count. Also, we finalize our proposal relating to MIPS
eligible clinicians practicing in RHCs and FQHCs, under which services
rendered by an eligible clinician that are payable under the RHC or
FQHC methodology
[[Page 77072]]
would not be subject to the MIPS payment adjustments.
After consideration of the public comments we received, we are
finalizing a modification to the following proposed policy:
Individual MIPS eligible clinicians who choose to report
as a group will have their performance assessed as part of a single TIN
associated with two or more eligible clinicians (including at least one
MIPS eligible clinician), as identified by a NPI, that have their
Medicare billing rights reassigned to the TIN (Sec. 414.1310(e)(1)).
In addition, we are finalizing the following policies:
    A group must meet the definition of a group at all times
during the performance period for the MIPS payment year in order to
have its performance assessed as a group (Sec. 414.1310(e)(2)).
Eligible clinicians and MIPS eligible clinicians within a
group must aggregate their performance data across the TIN in order for
their performance to be assessed as a group (Sec. 414.1310(e)(3)).
A group that elects to have its performance assessed as a
group will be assessed as a group across all four MIPS performance
categories (Sec. 414.1310(e)(4)).
(2) Registration
Under the PQRS, groups are required to complete a registration
process to participate in PQRS as a group. During the implementation
and administration of PQRS, we received feedback from stakeholders
regarding the registration process for the various methods available
for data submission. Stakeholders indicated that the registration
process was burdensome and confusing. Additionally, we discovered that
when groups were required to select their group submission mechanism
during registration, they sometimes selected an option not applicable
to their group, which created data mismatches; unreconciled data
mismatches can impact data quality. To address this issue, we proposed
to eliminate a registration
process for groups submitting data using third party entities. When
groups submit data utilizing third party entities, such as a qualified
registry, QCDR, or EHR, we are able to obtain group information from
the third party entity and discern whether the data submitted
represents group submission or individual submission once the data are
submitted.
At Sec. 414.1310(e)(5), we proposed that a group must adhere to an
election process established and required by CMS, as described in this
section. We did not propose to require groups to register to have their
performance assessed as a group except for groups submitting data on
performance measures via participation in the CMS Web Interface or
groups electing to report the Consumer Assessment of Healthcare
Providers and Systems (CAHPS) for MIPS survey for the quality
performance category as described further in section II.E.5.b. of the
proposed rule. For all other data submission mechanisms, groups must
work with appropriate third party entities to ensure the data submitted
clearly indicates that the data represent a group submission rather
than an individual submission. In order for groups to elect
participation via the CMS Web Interface or administration of the CAHPS
for MIPS survey, we proposed that such groups must register by June 30
of the applicable 12-month performance period (that is, June 30, 2017,
for performance periods occurring in 2017). For the criteria regarding
group reporting applicable to the four MIPS performance categories, see
section II.E.5.a. of the proposed rule.
The following is a summary of the comments we received regarding
our proposal that requires a group participating via the CMS Web
Interface or electing to administer the CAHPS for MIPS survey to adhere
to an election process established and required by CMS.
Comment: Several commenters expressed support for CMS's effort to
ease the registration burden by not requiring registration or an
election process for groups other than those electing to use the CMS
Web Interface or CAHPS for MIPS survey for reporting of the quality
performance category.
Response: We appreciate the support from commenters regarding our
proposal.
Comment: One commenter expressed concern that clinicians who
attempt to use the CMS Web Interface will not know if they have
patients who satisfy reporting requirements until they attempt to
submit their data. The commenter did not support the registration
process required in order to select the use of the CMS Web Interface as
a submission mechanism. The commenter asked whether clinicians will be
able to elect other options once registration for the CMS Web Interface
closes.
Response: Similar to the process that has occurred in past years
under the PQRS program, we intend to provide the beneficiary sample to
the groups that have registered to participate via the CMS Web
Interface approximately 1 month prior to the start of the submission
period. The submission period for the CMS Web Interface will occur
during an 8-week period following the close of the performance period
that will begin no earlier than January 1 and end no later than March
31 (the specific start and end dates for the CMS Web Interface
submission period will be published on the CMS Web site). This is the
earliest the sample is available due to the timing required to
establish and maintain an effective sample size.
We encourage groups to review the measure specifications for each
data submission mechanism and select the data submission mechanism that
applies best to the group prior to registering to participate via the
CMS Web Interface. We note that groups can determine whether they
would have Medicare beneficiaries on whose behalf to report data for
the CMS Web Interface measures. Groups that register to use the CMS Web
Interface prior to the registration deadline (June 30) can cancel their
registration or change their selection to report at the individual or
group level only before the close of registration.
After consideration of the public comments we received, we are
finalizing the following policy:
A group must adhere to an election process established and
required by CMS (Sec. 414.1310(e)(5)), which includes:
++ Groups will not be required to register to have their
performance assessed as a group except for groups submitting data on
performance measures via participation in the CMS Web Interface or
groups electing to report the CAHPS for MIPS survey for the quality
performance category. For all other data submission methods, groups
must work with appropriate third party entities as necessary to ensure
the data submitted clearly indicates that the data represent a group
submission rather than an individual submission.
++ In order for groups to elect participation via the CMS Web
Interface or administration of the CAHPS for MIPS survey, such groups
must register by June 30 of the applicable performance period (that is,
June 30, 2017, for performance periods occurring in 2017).
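The finalized election policy at Sec. 414.1310(e)(5) reduces to two rules: registration is required only for the CMS Web Interface and the CAHPS for MIPS survey, and that registration must occur by June 30 of the performance period. A minimal sketch, with hypothetical names (not CMS's system):

```python
# Illustrative sketch of the election policy at Sec. 414.1310(e)(5).
# Identifiers are hypothetical, not CMS's.
from datetime import date

# Only these two quality-reporting options require registration.
REGISTRATION_REQUIRED_MECHANISMS = {
    "cms_web_interface",
    "cahps_for_mips_survey",
}


def registration_required(mechanism: str) -> bool:
    """All other mechanisms instead rely on third party entities to flag
    the submission as a group (rather than individual) submission."""
    return mechanism in REGISTRATION_REQUIRED_MECHANISMS


def registration_deadline(performance_year: int) -> date:
    """June 30 of the applicable 12-month performance period."""
    return date(performance_year, 6, 30)
```

So a group electing the CMS Web Interface for the 2017 performance period would need to register by June 30, 2017, while a group using a qualified registry, QCDR, or EHR would not register at all.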
Additionally, for operational purposes, we are considering the
establishment of a voluntary registration process, if technically
feasible, for groups that intend to submit data on performance measures
via a qualified registry, QCDR, or EHR, which will enable such groups
to specify whether or not they intend to participate as a group and
which submission
[[Page 77073]]
mechanism (qualified registry, QCDR, or EHR) they plan to use for
reporting data, and provide other applicable information pertaining to
the TIN/NPIs. In order for groups to know which requirements apply to
their group for data submission purposes in advance of the performance
period or submission period, we want to establish a mechanism that
would allow us to identify the data submission mechanism a group
intends to use and notify groups of the applicable requirements they
would need to meet for the performance year, if technically feasible.
We believe it is essential for groups to be aware of their applicable
requirements in advance. As a result, our only means of informing
groups depends on receiving such information from groups through a
voluntary registration process; otherwise, we cannot contact groups
without knowing who they are, or inform groups of applicable
requirements without knowing whether a group intends to report at the
group level and which data submission mechanism it plans to utilize.
For groups that would not
voluntarily register, we would only be able to identify such groups
after the close of the submission period when data has been submitted.
To address this operational facet, we are considering the establishment
of a voluntary registration process similar to PQRS in that groups
would make an election of a data submission mechanism; however, based
on feedback we have received over the years from PQRS participants, the
voluntary registration process under MIPS would not restrict group
participation to the selected options, including individual- or group-
level reporting or a selected data submission mechanism, made by groups
during the voluntary registration process; groups would have the
flexibility to modify how they participate in MIPS.
With the optional participation in a voluntary registration
process, the assessment of a group's performance would not be impacted
by whether or not a group elects to participate in voluntary
registration. We note that if a group voluntarily registers,
information provided by the group would be used to proactively inform
MIPS eligible clinicians about the timeframe they would need to submit
data, which would be provided to the group during the performance
period. We intend to use the voluntary registration process as a means
to provide additional educational materials that are targeted and
tailored to such groups; and if technically feasible, provide such
groups with access to additional toolkits. We believe it is important
for groups to have such information in advance in order to prepare for
the submission of data. Also, we note that the voluntary registration
process differs from the registration process required for groups
electing to submit data via the CMS Web Interface, such that groups
registering on a voluntary basis would be able to opt out of group-
level reporting and/or modify their associated settings such as the
chosen submission mechanism at any time. The participation of a group
in MIPS via a data submission mechanism other than the CMS Web
Interface or a group electing to administer the CAHPS for MIPS survey
would not be contingent upon engagement in the voluntary registration
process. Whether or not a group elects to participate in voluntary
registration, a group must meet all of the requirements pertaining to
groups. We intend to issue further information regarding the voluntary
registration process for groups in subregulatory guidance.
e. Virtual Groups
(1) Implementation
Section 1848(q)(5)(I) of the Act establishes the use of voluntary
virtual groups for certain assessment purposes. The statute requires
the establishment and implementation of a process that allows an
individual MIPS eligible clinician or a group consisting of not more
than 10 MIPS eligible clinicians to elect to form a virtual group with
at least one other such individual MIPS eligible clinician or group of
not more than 10 MIPS eligible clinicians for a performance period of a
year. As determined in statute, individual MIPS eligible clinicians and
groups forming virtual groups are required to make such election prior
to the start of the applicable performance period under MIPS and cannot
change their election during the performance period. As discussed in
section II.E.4. of the proposed rule, we proposed that the performance
period would be based on a calendar year.
As we assessed the timeline for the establishment and
implementation of virtual groups and applicable election process and
requirements for the first performance period under MIPS, we identified
significant barriers regarding the development of a technological
infrastructure required for successful implementation and the
operationalization of such provisions that would negatively impact the
execution of virtual groups as a conducive option for MIPS eligible
clinicians or groups. The development of an electronic system before
policies are finalized poses several risks, particularly the
impediments to completing and adequately testing the system before
execution and to ensuring that any changes in policy made during the
rulemaking process are reflected in the system and operationalized
accordingly. We believe that it would be exceedingly difficult to build
a successful system to support the implementation of virtual groups,
and given these factors, such implementation would compromise not only
the integrity of the system, but also the intent of the policies.
Additionally, we recognize that it would be impossible for us to
develop an entire infrastructure for electronic transactions pertaining
to an election process, reporting of data, and performance measurement
before the start of the performance period beginning on January 1,
2017. Moreover, the actual implementation timeframe would be more
condensed given that the development, testing, and execution of such a
system would need to be completed months in advance of the beginning of
the performance period in order to provide MIPS eligible clinicians and
groups with an election period.
During the implementation and ongoing functionality of other
programs such as PQRS, Medicare EHR Incentive Program, and VM, we
received feedback from stakeholders regarding issues they encountered
when submitting reportable data for these programs. With virtual groups
as a new option, we want to minimize potential issues for end-users and
implement a system that encourages and enables MIPS eligible clinicians
and groups to participate in a virtual group. A web-based registration
process, which would simplify and streamline the process for
participation, is our preferred approach. Given the aforementioned
dynamics discussed in this section, implementation for the CY 2017
performance period is infeasible as a result of the insufficient
timeframe to develop a web-based registration process. We have assessed
alternative approaches for the first year only, such as an email
registration process, but believe that there are limitations and
potential risks for numerous errors, such as submitted information
being incomplete or not in the required format. A manual verification
process would cause a significant delay in verifying registration due
to the lack of an automated system to ensure the accuracy of the type
of information submitted that is required for registration. We believe
that an email registration process could become
[[Page 77074]]
cumbersome and a burden for groups to pursue participation in a virtual
group. Implementation of a web-based registration system for CY 2018
would provide the necessary time to establish and implement an election
process and requirements applicable to virtual groups, and enable
proper system development and operations. We intend to implement
virtual groups for the CY 2018 performance period, and we intend to
address all of the requirements pertaining to virtual groups in future
rulemaking. We requested comments on factors we should consider
regarding the establishment and implementation of virtual groups.
The following is a summary of the comments we received regarding
our intention to implement virtual groups for the CY 2018 performance
period and factors we should consider regarding the establishment and
implementation of virtual groups.
Comment: Many commenters supported the development of virtual
groups. Some commenters noted that virtual groups are needed because
some patients require multidisciplinary care in and out of a hospital
and practice.
Response: We appreciate the support from commenters.
Comment: Several commenters supported CMS' decision not to
implement virtual groups in year 1 in order to allow for the successful
technological infrastructure development and implementation of virtual
groups, but requested that CMS outline the criteria and requirements
regarding the execution of virtual groups as soon as possible. Several
commenters recommended that CMS use year 1 to develop the much-needed
guidance and assistance that outlines the steps groups would need to
take in forming virtual groups, such as drafting written agreements and
developing additional skills and tools.
Response: We appreciate the support from commenters regarding the
delay in the implementation of virtual groups. We intend to utilize
this time to work with the stakeholder community to further advance the
framework for virtual groups.
Comment: Multiple commenters expressed concern that virtual groups
would not be implemented in year 1 and requested that CMS
operationalize the virtual group option immediately. A few commenters
indicated that the delay would impact small and solo practices and
rural clinicians. Some commenters requested that in the absence of the
virtual group option, small and solo practices and rural clinicians
should be eligible for positive payment adjustments, but exempt from
any negative payment adjustment. The commenters stated that exempting
these physicians from negative payment adjustments would better
incentivize the pursuit of quality and performance improvement among
solo and small practices. A few commenters recommended that all
practices of 9 or fewer physicians be exempt from MIPS or APM
requirements until the virtual group option has been tested and is
fully operational. One commenter suggested that as an alternative to
delaying the implementation of virtual groups, CMS should allow virtual
groups to report performance data on behalf of small practices and
HPSAs for the CY 2017 performance period.
Response: As noted in the proposed rule, we identified significant
barriers regarding the development of a technological infrastructure
required for successful implementation and operationalization of the
provisions pertaining to virtual groups. As a result, we believe that
it would be technically infeasible to build a successful system to
support the implementation of virtual groups for year 1. Also, we note
that clinicians who are considered MIPS eligible clinicians are
required to participate in MIPS unless they are eligible for one of the
exclusions established in this final rule with comment period (see
section II.E.3. of this final rule with comment period); thus, a MIPS
eligible clinician participating in MIPS either as an individual or
group will be subject to a payment adjustment whether it is positive,
neutral, or negative. The Act does not provide discretion to only apply
a payment adjustment when a MIPS eligible clinician receives a positive
payment adjustment. In regard to the request to allow virtual groups to
have an alternative function for year 1, we intend to implement virtual
groups in a manner consistent with the statute.
Comment: A few commenters recommended that CMS redirect funds from
the $500 million set aside for bonus payments to top performers toward
financing a ``safe harbor'' for solo and small practices and rural
providers.
Response: This is not permissible by statute, as the $500 million
is available only for MIPS eligible clinicians with a final score at or
above the additional performance threshold.
Comment: Several commenters identified several factors CMS should
consider as it develops further policies relating to virtual groups,
including the following: Ensuring that virtual groups have shared
accountability for performance improvement; limiting the submission
mechanisms to those that require clinicians in the virtual group to
collaborate on ongoing quality analysis and improvement; maintaining
flexibility for factors being considered for virtual groups;
implementing a virtual group pilot to be run prior to 2018
implementation; and hosting listening sessions to receive input and
feedback on this option with specialty societies and other
stakeholders. Several commenters requested that CMS avoid placing
arbitrary limits on minimum or maximum size, geographic proximity, or
specialty of virtual groups, but allow virtual groups to determine
group size, geographic affiliations, and group composition. One
commenter encouraged CMS to explore broad options for virtual groups
outside the norm of TIN/NPI grouping. However, a few commenters
recommended that virtual groups be limited to practices of same or
similar specialties or clinical standards. Another commenter requested
more detail on the implementation of virtual groups.
A few commenters recommended the following minimum standards for
members of a virtual group: Have mutual interest in quality
improvement; care for similar populations; and be responsible for the
impact of their decisions on the whole group. A few commenters
suggested that virtual groups should not have their performance ratings
compared to other virtual groups, but instead, virtual groups should
have their performance ratings compared to their annual performance
rating during the initial implementation of virtual groups given that
each virtual group's clinicians and beneficiaries may have varying risk
preventing a direct comparison.
Response: We appreciate the suggestions from the commenters and as
a result of the recommendations, we are interested in obtaining further
input from stakeholders regarding the types of provisions and elements
that should be considered as we develop requirements applicable to
virtual groups. Therefore, we are seeking additional comment on the
following issues for future consideration: The advantages and
disadvantages of establishing minimum standards, similar to those
suggested by commenters as noted above; the types of standards that
could be established for members of a virtual group; the factors that
would need to be considered in establishing a set of standards; the
advantages and disadvantages of requiring members of a virtual group to
adhere to minimum standards; the types of factors or parameters that
could be considered in developing a virtual group framework to ensure
that virtual groups would be able to effectively use
[[Page 77075]]
their data for meaningful analytics; the advantages and disadvantages
of forming a virtual group pilot in preparation for the development and
implementation of virtual groups; and the framework elements that could
be included to form a virtual group pilot.
    As we develop requirements applicable to virtual groups, we will
also consider the ways in which virtual groups will each have a unique
composition and varying patient populations, and how the performance of
virtual groups will be assessed, scored, and compared.
We are committed to pursuing the active engagement of the stakeholders
throughout the process of establishing and implementing virtual groups.
Comment: Several commenters recognized the potential value of
virtual groups to ease the burden of reporting under MIPS. Commenters
recommended that CMS expand virtual groups to promote the adoption of
activities that enhance care coordination and improve quality outcomes
that are often out of reach for small practices due to limited
resources; encourage virtual groups to establish shared clinical
guidelines, promote clinician responsibility, and have the ability to
track, analyze, and report performance results; and promote
information-sharing and collaboration among its clinicians.
Response: We appreciate the suggestions from the commenters and as
a result of the recommendations, we are interested in obtaining further
input from stakeholders regarding the technical and operational
elements and data analytics/metrics that should be considered as we
develop requirements applicable to virtual groups. Therefore, we are
seeking additional comment on the following issues for future
consideration: The types of requirements that could be established for
virtual groups to promote and enhance the coordination of care and
improve the quality of care and health outcomes; and the parameters
(for example, a shared patient population), if any, that could be
established to ensure virtual groups have the flexibility to form any
composition of virtual group permissible under the Act, while
accounting for virtual groups reporting on measures across the four
performance categories that are collectively applicable to a virtual
group, given that the composition of virtual groups could have many
differing forms. We
believe that each MIPS eligible clinician who is part of a virtual
group has a shared responsibility in the performance of the virtual
group and the formation of a virtual group provides an opportunity for
MIPS eligible clinicians to share and potentially streamline best
practices.
Comment: One commenter requested clarification on what constitutes
a virtual group and how virtual groups will be formed. The commenter
recommended that performance for individual MIPS eligible clinicians in
virtual groups should be based on specialty-specific measures. The
commenter also recommended that, when assessing performance, CMS should
develop sufficient risk adjustment mechanisms that ensure MIPS eligible
clinicians are only scored on the components of care they have control
over, and CMS should develop robust and appropriate attribution
methods. Another commenter recommended that CMS require virtual groups
to demonstrate a reliable mechanism for establishing patient
attribution as well as the ability to report throughout the performance
period.
Response: We will consider these suggestions as we develop
requirements applicable to virtual groups in future rulemaking. In
regard to the commenter's request for clarification regarding what
constitutes a virtual group and how they are formed, we note that
section 1848(q)(5)(I) of the Act requires the establishment and
implementation of a process that allows an individual MIPS eligible
clinician or a group consisting of not more than 10 MIPS eligible
clinicians to elect to form a virtual group with at least one other
such individual MIPS eligible clinician or group of not more than 10
MIPS eligible clinicians for a performance period of a year.
    Comment: One commenter suggested that virtual groups could be
organized similarly to the current PQRS GPRO, in which virtual groups
would have the flexibility to select both quality and resource use
measures once they are further developed.
Response: We want to clarify that there is no virtual group
reporting or similar option under PQRS. We note that virtual groups are
not a data submission mechanism. MIPS eligible clinicians would have
the option to participate in MIPS as individual MIPS eligible
clinicians, groups, or, following implementation, virtual groups.
Comment: One commenter recommended the use of third-party
certifications to assist with emerging virtual groups. The commenter
also suggested that CMS provide bonus points for clinicians that
register as virtual groups, similar to electronic reporting of quality
measures.
Response: We will consider these suggestions as we develop
requirements for virtual groups in future rulemaking.
Comment: A few commenters encouraged CMS to assess many of the
virtual group challenges associated with EHR technology. One commenter
stated that most small independent clinician offices do not use the
same EHR technology as their neighbors, and virtual groups would create
reporting and measurement challenges, especially with respect to the
advancing care information performance category; the commenter
suggested that CMS provide attestation as an option.
Another commenter indicated that the implementation of virtual
groups could be unsuccessful based on the following factors: There is
no necessary consistency in the nomenclature and methods used by
different health IT vendors and developers, which would prevent
prospective virtual group members from correctly understanding the
degree and nature of the differences in approaches regarding data
collection and submission; any vendor-related issues would be combined
in unpredictable ways within virtual groups, causing the datasets to
not correspond categorically and having inconsistent properties among
the datasets; there is the prospect of a mismatch of properties for
virtual group members on assessed measures, where neither excellence
nor laggardly work would be clearly visible; and there is a risk of a
practice joining a virtual group with ``free riders,'' which would
result in a churning of membership and a serious loss of year-to-year
comparison capabilities. In order to address such issues, the commenter
recommended that CMS develop a system that includes the capability for
clinicians and groups to participate in a service similar to online
dating service applications that would allow clinicians and groups to
use self-identifying descriptors to select their true peers within
similar CEHRT.
A few commenters requested clarification regarding the approved
methods for submitting and aggregating disparate clinician data for
virtual groups, and whether or not new clinicians should be included in
virtual groups if they have not been part of the original TIN
throughout the reporting year.
Response: We thank the commenters for providing suggestions and
identifying potential health IT challenges virtual groups may encounter
regarding the reporting and submission of data. As a result of the
recommendations and identification of potential barriers, we are
interested in
[[Page 77076]]
obtaining further input from stakeholders on these issues as we
establish provisions pertaining to virtual groups and build a
technological infrastructure for the operationalization of virtual
groups. Therefore, we are seeking comment on the following issues for
future consideration: The factors virtual groups would need to consider
and address in order for the reporting and submission of data to be
streamlined in a manner that allows for categorization of datasets and
comparison capabilities; the factors an individual clinician or small
practice that is part of a virtual group would need to consider in
order for their CEHRT to interoperate with other CEHRT; the advantages
and disadvantages of having members
of a virtual group use one form of CEHRT; the potential barriers that
may make it difficult for virtual groups to be prepared to have a
collective, streamlined system to capture measure data; and the
timeframe virtual groups would need in order to build a system or
coordinate a systematic infrastructure that allows for a collective,
streamlined capturing of measure data.
    Comment: One commenter suggested having Virtual Integrated Clinical
Networks (VICNs) as an alternative type of delivery system within the
Quality Payment Program. The commenter further indicated that the
development of VICNs can lead to better patient care and lower costs by
including only physicians and other clinicians who commit to value-
based care at the outset. The commenter noted that in order to
participate, clinicians would have to agree to work and practice in a
value-based way, with transparency of patient satisfaction, clinical
outcomes, and cost results.
Response: We will consider the suggestion as we develop the
framework and requirements for virtual groups.
    Comment: One commenter suggested that CMS change the name from
``virtual groups'' to ``virtual networks,'' since a group includes
coordination of a wide range of physician and related ancillary
services under one roof that is seamless to patients, while the term
``network'' implies more of an alignment of multiple group practices
and clinicians operating across the medical community for purposes of
reporting in MIPS.
Response: We will consider the suggestion as we establish the
branding for virtual groups.
    Comment: Multiple commenters did not support limiting the
composition of virtual groups to an individual MIPS eligible clinician,
or a group of not more than 10 MIPS eligible clinicians, joining with
at least one other such individual MIPS eligible clinician or group of
not more than 10 MIPS eligible clinicians.
Response: With regard to commenters not supporting the composition
limit of virtual groups, we note that section 1848(q)(5)(I) of the Act
requires the establishment and implementation of a process that allows
an individual MIPS eligible clinician or a group consisting of not more
than 10 MIPS eligible clinicians to elect to form a virtual group with
at least one other such individual MIPS eligible clinician or group of
not more than 10 MIPS eligible clinicians for a performance period of a
year. Thus, we do not have the authority to modify this statutory
provision.
Comment: A few commenters requested that CMS work with clinician
communities as it establishes the framework for the virtual group
option. Commenters recommended that CMS protect against antitrust
issues that may arise regarding physician collaboration to recognize
economies of scale. One commenter indicated that accreditation entities
have experience with the Federal Trade Commission (FTC) rules related
to clinically integrated networks formed to improve the quality and
efficiency of care delivered to patients and that publicly vetted
accreditation standards could guide the development of virtual groups
in a manner that incentivizes sustainable growth as integrated networks
capable of long-term success under value-based reimbursement.
Response: We will consider the recommendations provided as we
develop requirements pertaining to virtual groups.
Comment: One commenter recommended that in future rulemaking, CMS
create a unique identifier for virtual groups, allow multiple TINs and
split TINs, avoid thresholds based on the number of patients treated,
avoid restricting the number of participants in virtual groups, and
avoid limitations on the number of virtual groups. Another commenter
suggested that virtual groups should be reporting data at either the
TIN level, NPI/TIN level, or APM level.
    Response: We appreciate the recommendations from the commenters
and, as a result of the suggestions, we are interested in obtaining further
input from stakeholders regarding a group identifier for virtual
groups. Therefore, we are seeking additional comment for future
consideration on the following: The advantages and disadvantages of
creating a new identifier for virtual groups; and the potential options
for establishing an identifier for virtual groups. We intend to explore
this issue.
We thank the commenters for their input regarding our intention to
implement virtual groups for the CY 2018 performance period and factors
we should consider regarding the establishment and implementation of
virtual groups. We intend to explore the types of requirements
pertaining to virtual groups, including, but not limited to, defining a
group identifier for virtual groups, establishing the reporting
requirements for virtual groups, identifying the submission mechanisms
available for virtual group participation, and establishing
methodologies for how virtual group performance will be assessed and
scored. In addition, during the CY 2017 performance period, we will be
convening a user group of stakeholders to receive further input on the
factors CMS should consider in establishing the requirements for
virtual groups and identify mechanisms for the implementation of
virtual groups in future years.
(2) Election Process
Section 1848(q)(5)(I)(iii)(I) of the Act provides that the election
process must occur prior to the performance period and may not be
changed during the performance period. We proposed to establish an
election process that would end on June 30 of a calendar year preceding
the applicable performance period. During the election process, we
proposed that individual MIPS eligible clinicians and groups electing
to be a virtual group would be required to register in order to submit
reportable data. Virtual groups would be assessed across all four MIPS
performance categories. In future rulemaking, we will address all
elements relating to the election process and outline the criteria and
requirements regarding the formation of virtual groups. We solicited
public comments on this proposal.
The following is a summary of the comments we received regarding our
proposals that apply to virtual groups, including: The establishment of
an election process that would end on June 30 of a calendar year
preceding the applicable performance period; the requirement of
individual MIPS eligible clinicians and groups electing to be a virtual
group to register in order to submit reportable data; and the
assessment of virtual groups across all four MIPS performance
categories.
Comment: A few commenters requested that CMS reconsider the
deadline by which virtual groups would be required to make an election
to participate in MIPS. One commenter recommended that the deadline
should be 90 days before the performance period as opposed to 6 months.
[[Page 77077]]
Response: We will consider the recommendations as we establish the
election process for virtual groups.
Comment: One commenter indicated that a registration process for
the virtual group option would be an unnecessary burden and recommended
that registration by virtual groups should only be required if the
group participates in MIPS via the CMS Web Interface. Another commenter
expressed concern that without a manageable registration system for
virtual groups, there would be too many loopholes, which would add
confusion to the program.
Response: We appreciate the commenters providing recommendations
and we will consider the recommendations as we establish the virtual
group registration process.
After consideration of the public comments we received, and with
the delay of virtual group implementation, we are not finalizing our
proposal to establish a virtual group election process that would end
on June 30 for the CY 2017 performance period; the proposed requirement
of individual MIPS eligible clinicians and groups electing to be a
virtual group to register in order to submit reportable data; or the
proposed assessment of virtual groups across all four MIPS performance
categories.
4. MIPS Performance Period
MIPS incorporates many of the requirements of several programs into
a single, comprehensive program. This consolidation includes key policy
goals as common themes across multiple categories such as quality
improvement, patient and family engagement, and care coordination
through interoperable health information exchange. However, each of
these legacy programs included different eligibility requirements,
reporting periods, and systems for clinicians seeking to participate.
This means that we must balance potential impacts of changes to systems
and technical requirements to successfully synchronize reporting, as
noted in the discussion regarding the definition of a MIPS eligible
clinician in the proposed rule (81 FR 28173). We must take operational
feasibility, systems impacts, and education and outreach on
participation into account in developing technical requirements for
participation. One area where this is particularly important is in the
definition of a performance period.
MIPS applies to payments for items and services furnished on or
after January 1, 2019. Section 1848(q)(4) of the Act requires the
Secretary to establish a performance period (or periods) for a year
(beginning with 2019). Such performance period (or periods) must begin
and end prior to such year and be as close as possible to such year. In
addition, section 1848(q)(7) of the Act provides that, not later than
30 days prior to January 1 of the applicable year, the Secretary must
make available to each MIPS eligible clinician the MIPS adjustment
(and, as applicable, the additional MIPS adjustment) applicable to the
MIPS eligible clinician for items and services furnished by the MIPS
eligible clinician during the year.
We considered various factors when developing the policy for the
MIPS performance period. Stakeholders have stated that having a
performance period as close as possible to when payments are adjusted
is beneficial, even if such a period would be less than a year. We have also
received feedback from stakeholders that they prefer having a 1 year
performance period and have further suggested that the performance
period start during the calendar year (for example, having the
performance period run from July 1 through June 30). We
additionally considered operational factors, such as that a 1 year
performance period may be beneficial for all four performance
categories because many measures and activities cannot be reported in a
shorter time frame. We also considered that data submission activities
and claims for items and services furnished during the 1 year
performance period (which could be used for claims- or administrative
claims-based quality or cost measures) may not be fully processed until
the following year.
These circumstances will require adequate lead time to collect
performance data, assess performance, and compute the MIPS adjustment
so the applicable MIPS adjustment can be made available to each MIPS
eligible clinician at least 30 days prior to when the MIPS payment
adjustment is applied each year. For 2019, these actions will occur
during 2018. In other payment systems, we have used claims that are
processed within a specified time period after the end of the
performance period, such as 60 or 90 days, for assessment of
performance and application of the MIPS payment adjustment. For MIPS,
we proposed at Sec. 414.1325(g)(2) to use claims that are processed
within 90 days, if operationally feasible, after the end of the
performance period for purposes of assessing performance and computing
the MIPS payment adjustment. We proposed that if we determined that it
is not operationally feasible to have a claims data run-out for the 90-
day timeframe, then we would utilize a 60-day duration in the calendar
year immediately following the performance period.
This proposal does not affect the performance period per se, but
rather the deadline by which claims for items and services furnished
during the performance period need to be processed for those items and
services to be included in our calculation. To the extent that claims
are used for submitting data on MIPS measures and activities to us,
such claims would have to be processed by no later than 90 days after
the end of the applicable performance period, in order for information
on the claims to be included in our calculations. As noted in this
section, if we determined that it is not operationally feasible to have
a claims data run-out for the 90-day timeframe, then we would utilize a
60-day duration. As an alternative to our proposal, we also considered
using claims that are paid within 60 days after the end of CY 2017 for assessment of
performance and application of the MIPS payment adjustment for 2019. We
solicited comments on both approaches.
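The run-out logic described above reduces to simple date arithmetic. The
following sketch is illustrative only; the function name and structure
are hypothetical and are not part of the rule or any CMS system:

```python
from datetime import date, timedelta

def claims_runout_deadline(period_end: date, ninety_day_feasible: bool = True) -> date:
    """Deadline by which claims must be processed to count toward MIPS
    calculations: 90 days after the end of the performance period, or
    60 days if the 90-day run-out is not operationally feasible."""
    return period_end + timedelta(days=90 if ninety_day_feasible else 60)

# For a performance period ending December 31, 2017:
print(claims_runout_deadline(date(2017, 12, 31)))         # 2018-03-31
print(claims_runout_deadline(date(2017, 12, 31), False))  # 2018-03-01
```

Under the proposed 90-day run-out, claims for CY 2017 items and services
would need to be processed by the end of March 2018 to be included.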
Given the need to collect and process information, we proposed at
Sec. 414.1320 that for 2019 and subsequent years, the performance
period under MIPS would be the calendar year (January 1 through
December 31) 2 years prior to the year in which the MIPS adjustment is
applied. For example, the performance period for the 2019 MIPS
adjustment would be the full CY 2017, that is, January 1, 2017 through
December 31, 2017. We proposed to use the 2017 performance year for the
2019 MIPS payment adjustment consistent with other CMS programs. This
approach allows for a full year of measurement and sufficient time to
base adjustments on complete and accurate information.
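The 2-year offset between the performance period and the payment year
can be expressed as a small date calculation. This is a hypothetical
helper for illustration, not regulation text or CMS code:

```python
from datetime import date

def mips_performance_period(payment_year: int) -> tuple:
    """Calendar-year performance period 2 years prior to the MIPS
    payment year, per the proposal at Sec. 414.1320."""
    perf_year = payment_year - 2
    return date(perf_year, 1, 1), date(perf_year, 12, 31)

# The 2019 MIPS adjustment is based on CY 2017,
# January 1, 2017 through December 31, 2017:
start, end = mips_performance_period(2019)
```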
For individual MIPS eligible clinicians and groups with less than
12 months of performance data to report, such as when a MIPS eligible
clinician switches practices during the performance period or when a
MIPS eligible clinician may have stopped practicing for some portion of
the performance period (for example, a MIPS eligible clinician who is
on family leave, or has an illness), we proposed that the individual
MIPS eligible clinician or group would be required to report all
performance data available from the performance period. Specifically,
if a MIPS eligible clinician is reporting as an individual, they would
report all partial year performance data. Alternatively, if the MIPS
eligible clinician is reporting with a group, then the group would
report all
[[Page 77078]]
performance data available from the performance period, including
partial year performance data available for the individual MIPS
eligible clinician.
Under this approach, MIPS eligible clinicians with partial year
performance data could achieve a positive, neutral, or negative MIPS
adjustment based on their performance data. We proposed this approach
to incentivize accountability for all performance during the
performance period. We also believe these policies would help minimize
the impact of partial year data. First, MIPS eligible clinicians with
volume below the low-volume threshold would be excluded from any MIPS
payment adjustments. Second, MIPS eligible clinicians who report
measures, yet have insufficient sample size, would not be scored on
those measures and activities. Refer to section II.E.6. of this final
rule with comment period for more information on scoring.
To potentially refine this proposal in future years, we solicited
comments on methods to accurately identify MIPS eligible clinicians
with less than a 12-month reporting period, notwithstanding common and
expected absences due to illness, vacation, or holiday leave. Reliable
identification of these MIPS eligible clinicians would allow us to
analyze the characteristics of MIPS eligible clinicians' patient
population and better understand how a reduced reporting period impacts
performance.
We also solicited public comment on an alternative approach for
future years for assessment of individual MIPS eligible clinicians with
less than 12 months of performance data in the performance year. For
example, if we can identify such MIPS eligible clinicians and confirm
there are data issues that led to invalid performance calculations,
then we could score the MIPS eligible clinician with a final score
equal to the performance threshold, which would result in a zero MIPS
payment adjustment. We note this approach would not assess a MIPS
eligible clinician's performance for partial-year performance data. We
do not believe that consideration of partial year performance is
necessary for assessment of groups, which should have adequate coverage
across MIPS eligible clinicians to provide valid performance
calculations.
We also solicited comment on reasonable thresholds for considering
performance that is less than 12 months. For example, we expect that
some MIPS eligible clinicians will take leave related to illness,
vacation, and holidays. We would not anticipate applying special
policies for lack of performance related to these common and expected
absences assuming MIPS eligible clinicians' quality reporting includes
measures with sufficient sample size to generate valid and reliable
scores. We solicited comment on how to account for MIPS eligible
clinicians with extended leave that may affect measure sample size.
We solicited comments on these proposals and approaches. The
following is a summary of the comments we received regarding our
proposals for the MIPS performance period.
Comment: Numerous commenters believed that the first MIPS
performance period should be delayed or treated as a transition year.
The commenters stated that the proposed timeline for implementation was
too compressed, unrealistic, and aggressive. They cited numerous
educational and readiness factors for the recommended delay, including:
The time needed for stakeholders to digest the final rule with comment
period and engage in further education; the time needed to make the
necessary modifications to their practices without overly burdening
their systems on such a short implementation timeline; and the time
needed to establish the administrative and technological tools
necessary to meet the reporting requirements. The commenters suggested
numerous alternative start dates to allow what they believed would be
sufficient time for MIPS eligible clinicians to prepare for reporting,
including a 2-year delay in implementation, use of CY 2018 as the
initial assessment period for MIPS, a start date at least 15 months
after the adoption of the final rule with comment period, a start date
no earlier than July 1, 2017, and lastly a start date of April 1, 2017.
    Response: We appreciate the suggestions and have examined the
issues raised closely. We agree with the commenters that, to ensure a
successful implementation of the MIPS, MIPS eligible clinicians need
additional time to prepare their practices for reporting under MIPS.
We have therefore decided to finalize a modification of our proposal
for the performance period for the transition year of MIPS to provide
flexibility to MIPS eligible clinicians as they familiarize themselves
with MIPS requirements in 2017 while maintaining reliability.
Accordingly, we are finalizing at
Sec. 414.1320(a)(1) that for purposes of the 2019 MIPS payment year,
the performance period for all performance categories and submission
mechanisms except for the cost performance category and data for the
quality performance category reported through the CMS Web Interface,
for the CAHPS for MIPS survey, and for the all-cause hospital
readmission measure, is a minimum of a continuous 90-day period within
CY 2017, up to and including the full CY 2017 (January 1, 2017 through
December 31, 2017). Thus, MIPS eligible clinicians will only need to
report for a minimum of a continuous 90-day period within CY 2017, for
the majority of the submission mechanisms. This 90-day period can occur
anytime within CY 2017, so long as the 90-day period begins on or after
January 1, 2017, and ends on or before December 31, 2017. We note that
the continuous 90-day period is a minimum; MIPS eligible clinicians may
elect to report data on more than a continuous 90-day period, including
a period of up to the full 12 months of 2017. For groups that elect to
utilize the CMS Web Interface or report the CAHPS for MIPS survey, we
note that these submission mechanisms utilize certain assignment and
sampling methodologies that are based on a 12-month performance period.
In addition, administrative claims-based measures (this includes all of
the cost measures and the all-cause hospital readmission measure) are
based on an attributed population using the 12-month period. Additionally,
we are finalizing at Sec. 414.1320(a)(2) that for purposes of the 2019
MIPS payment year, for data reported through the CMS Web Interface or
the CAHPS for MIPS survey and administrative claims-based cost and
quality measures, the performance period under MIPS is CY 2017 (January
1, 2017 through December 31, 2017). Please note that, unless otherwise
stated, any reference in this final rule with comment period to the
``CY 2017 performance period'' is intended to be an inclusive reference
to all performance periods occurring during CY 2017. More details on
these submission mechanisms are covered in section II.E.5.a.2. of this
final rule with comment period.
We believe the flexibilities we are providing in our modified
proposal discussed above will provide time for stakeholders to engage
in further education about the new requirements and make the necessary
modifications to their practices to accommodate reporting under the
MIPS. We note that the continuous 90-day period of time required for
reporting can begin at any point within the CY 2017 performance period,
up to and including October 2, 2017, which is the last date on which
the continuous 90-day reporting period can begin and still end within
the CY 2017 performance period.
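The October 2, 2017 cutoff follows from simple date arithmetic,
assuming the 90-day window is measured as 90 days elapsed from the
start date. The sketch below is an illustrative check, not regulation
text:

```python
from datetime import date, timedelta

cy2017_end = date(2017, 12, 31)
last_start = date(2017, 10, 2)  # last permissible start date per the rule

# A period beginning October 2, 2017 and running 90 days lands exactly
# on December 31, 2017, the end of the CY 2017 performance period.
assert last_start + timedelta(days=90) == cy2017_end

# Starting any later would push the period past the end of CY 2017:
assert date(2017, 10, 3) + timedelta(days=90) > cy2017_end
```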
For the second year under the MIPS, we are finalizing our proposal
to require reporting and performance assessment
[[Page 77079]]
for the full CY performance period for purposes of the quality and cost
performance categories. Specifically, we are finalizing at Sec.
414.1320(b)(1) that for the 2020 MIPS adjustment, for purposes of the
quality and cost performance categories, the performance period is CY
2018 (January 1, 2018 through December 31, 2018). We do believe,
however, that for the improvement activities and advancing care
information performance categories, utilizing a continuous 90-day
period that occurs during the 12-month MIPS performance period will
assist MIPS eligible clinicians as they continue to familiarize
themselves with the requirements under the MIPS. Additionally, to allow
MIPS eligible clinicians and groups adequate time to transition to
technology certified to the 2015 Edition for use in CY 2018, we believe
it is appropriate to allow reporting on any continuous 90-day period
that occurs during the 12-month MIPS performance period for the
advancing care information performance category in CY 2018.
Specifically, for the improvement activities and advancing care
information performance categories, we are finalizing at Sec.
414.1320(b)(2) that the performance period under MIPS is a minimum of a
continuous 90-day period within CY 2018, up to and including the full
CY 2018 (January 1, 2018 through December 31, 2018).
Comment: Other commenters suggested making 2018 the first
performance period for the first payment year of 2019. They stated that
MIPS eligible clinicians could receive more timely feedback on their
performance and still have the opportunity to make improvements in the
second half of 2017 before the first performance period would begin.
Response: It is not technically feasible to establish the first
performance period in 2018 and begin applying MIPS payment adjustments
in 2019. Some of the factors involved include: Allowing for a data
submission period that occurs after the close of the performance
period; running our calculation and scoring engines to calculate
performance category scores and the final score; allowing for a
targeted review period; establishing and maintaining budget neutrality;
and issuing each MIPS eligible clinician's specific MIPS payment
adjustment. Based on our experience under the PQRS, VM, and Medicare
EHR Incentive Program for Eligible Professionals, all of these
activities on average take upwards of 9-12 months. We will continue to
examine these operational processes to add efficiencies and reduce this
timeframe in future years.
Comment: Other commenters noted that MIPS eligible clinicians
ideally require 18 to 24 months' time to adequately identify, adopt,
and apply measures to established workflows for consistent data
capture. The commenters also noted that most MIPS eligible clinicians
are not yet comfortable with ICD-10 and added that there are 1,491 new
ICD-10-CM codes becoming effective in October 2016, and that MIPS
eligible clinicians would not have sufficient time to refine processes
within the proposed timeline (that is, by January 1, 2017).
Response: We are finalizing a modified CY 2017 performance period,
as discussed above. We believe this will allow MIPS eligible clinicians
to adequately identify, adopt, and apply measures to established
workflows for consistent data capture as they familiarize themselves
with MIPS requirements in 2017. We appreciate the concern raised by the
commenters on the introduction of the new ICD-10 codes. However, we
note that there are numerous resources available to assist commenters
on incorporating these codes into their workflows at https://www.cms.gov/medicare/Coding/ICD10/index.html.
Comment: Another commenter requested more time for clinicians and
payers other than Medicare to make adjustments to programs and amend
large numbers of significant risk-based contracts between states
and health plans, and between health plans and their network delivery
system individual practice associations (IPAs), groups, and clinicians.
The commenter stated that this would allow time for significant
contract and subcontract amendments for other payers, and system
changes for metrics, claims, and benefit systems.
Response: We believe the flexibilities we are providing in the
first performance period, as discussed in this final rule with comment
period, will allow MIPS eligible clinicians and third party
intermediaries the time needed to update their systems to meet program
requirements and amend any agreements as necessary.
Comment: Some commenters were concerned that setting the
performance period too soon would not give third party intermediaries,
such as EHR vendors, qualified registries, health IT vendors, and
others the time needed to update their systems to meet program
requirements. The commenters recommended setting the performance period
later to allow these third party intermediaries time to validate new
data entry and testing tools and overhaul their systems to comply with
2015 Edition certification requirements. Another commenter believed the
proposed policies would often require the use of multiple database
systems, which could not be accomplished in the time required.
Response: We agree with the commenters that ensuring that third
party intermediaries have sufficient time to update their technologies
and systems will be a key component of ensuring that MIPS eligible
clinicians are ready to meet program requirements. We believe the
flexibilities we are providing in the first performance period, as
discussed in this final rule with comment period, will allow third
party intermediaries the time needed to update their systems to support
MIPS eligible clinician participation. We note that there are no new
certification requirements required for the Quality Payment Program and
many health IT vendors have already begun work toward the 2015 Edition
certification criteria, which were finalized in October 2015. We
believe that the flexibility offered, and the lead time before the
required use of technology certified to the 2015 Edition, will mitigate
these concerns; however, we intend to monitor health IT development progress, adoption
and implementation, and the readiness of QCDRs, health IT vendors, and
other third parties supporting MIPS eligible clinician participation.
Comment: Another commenter believed a later start date would
provide CMS with more time to address several issues that were absent
from the proposed rule, including the development of virtual groups,
improved risk-adjustment and attribution methods, further refinement of
episode-based resource measures and measurement tools, and enhanced
data feedback to participants. One commenter stated that they believed that
the government programs that regulate and support MIPS have yet to be
designed, tested, and implemented. The commenter stated they do not
have MIPS performance thresholds or measure benchmark data and
therefore cannot prepare their office to streamline the new processes
and report appropriately in 2017.
Response: We respectfully disagree with the commenter and intend to
address further refinements to the MIPS program in future years. We
appreciate the commenter's desire to delay the start of the MIPS until
we are able to have full implementation of these factors. However, as
we have noted in other sections within this final rule with comment
period, we intend to implement these provisions when technically
feasible, as in the case of
[[Page 77080]]
virtual groups, and when available, as in the case of improved risk-
adjustment and attribution methods as well as additional episode-based
resource measures. Additionally, as noted in section II.E.10. of this
final rule with comment period, we intend to provide feedback to
participants as required by statute, and we will enhance these feedback
efforts over time. Lastly, as indicated in section II.E.6.a. of this
final rule with comment period, due to the additional factors we are
incorporating to simplify our scoring methodology, we have published
the MIPS performance threshold in this final rule with comment period,
and we will publish the measure benchmarks where available prior to the
beginning of the performance period.
Comment: Several commenters recommended that the first performance
period occur later than January 1, 2017 based on commenters' analysis
of the MACRA statute. Some commenters believe a delayed start date of
July 1, 2017 would better match Congressional intent that the
performance period be as close to the MIPS payment adjustment period as
possible, while still allowing for the related MIPS payment adjustments
to take place in 2019. The commenters further recommended that CMS use
the time between the publication of the final rule with comment period
and a delayed performance period start date to test and refine the
performance feedback mechanisms for the Quality Payment Program. The
commenters stated that by including the ``as close as possible''
language in section 1848(q)(4) of the Act, the Congress sought to urge
CMS to select a performance period that will close the gap on CMS's
practice of setting a 2-year look-back period for Medicare quality
programs.
    Response: We appreciate the commenters' concerns about Congressional
intent for having a performance period as close as possible to the
related MIPS payment adjustments. However, we believe our proposal is
consistent with section 1848(q)(4) of the Act, as a performance period
that occurs 2 years prior to the payment year is as close to the
payment year as is currently possible. As noted above, from our
experiences under the PQRS, VM, and Medicare EHR Incentive Program for
Eligible Professionals, it takes approximately 9-12 months to perform
the operational processes to produce a comprehensive and accurate list
of MIPS eligible clinicians to receive a MIPS payment adjustment. We
will continue to assess this timeframe for efficiencies in the future.
Comment: Some commenters noted that section 1848(s) of the Act, as
added by section 102 of MACRA, requires a quality measure development
plan with annual progress reports, the first of which must be issued by
May 1, 2017. The commenters stated that by starting the Quality Payment
Program on January 1, 2017, before the first annual progress report is
finalized, CMS will not have finalized key program requirements before
it begins MIPS.
Response: We note that the commenters are referring to 2 separate
requirements under section 1848(s) of the Act. The quality measure
development plan, known as the CMS Quality Measure Development Plan
(MDP), was finalized and posted on May 2, 2016, which is available at
https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf and
is required to be updated as appropriate. In addition, the MDP Annual
Report, which is to report on progress in developing measures, is
required to be posted annually beginning not later than May 1, 2017. We
intend to post the initial MDP Annual Report on May 1, 2017. While
these statutory requirements are mandatory and support the development
of the MIPS program, they are not prerequisites for the implementation
of the MIPS program.
Comment: Several commenters stated that the performance period was
too early and suggested that CMS create an initial transitional
performance period or phase-in period for the MIPS program. These
commenters recommended numerous modifications and advantages as part of
the transitional or phase-in period including: Phasing in some of the
performance requirements such as requiring fewer quality measures and/
or improvement activities in the transition year, creation of gradual
performance targets which would allow sufficient time for participants
to adapt to data collection and reporting prior to increasing
performance standards, and phasing in the MIPS adjustment amounts such
as applying a maximum MIPS payment adjustment of 2 percent in the
transition year of the program, or applying negative MIPS adjustments
only to groups of MIPS eligible clinicians above a certain size. These
commenters noted that the advantages of a transitional or phase-in
period include allowing CMS to address its concerns around calculation
of outcome and claims-based measures, the feasibility of using
different reporting mechanisms, meeting statutory deadlines, postponing
changes to the advancing care information performance category, and the
capability of CMS' internal processes.
The commenters suggested various dates for the transitional or
phase-in period, such as: January 1, 2017 through June 30, 2017; July
1, 2017 through December 31, 2017; allowing MIPS eligible clinicians to
select a 6-month performance period; or allowing MIPS eligible
clinicians to use the full calendar year with an optional look-back to
January 1, 2017. The commenters requested that CMS provide technical
assistance and a submission verification process during the transition
period.
Response: We agree with the commenters that there are numerous
advantages to having a transitional or phase-in period for the
transition year. As indicated previously in this section of this final
rule with comment period, we have modified the performance period for
the transition year to occur for a minimum of one continuous 90-day
period up to a full calendar year within CY 2017 for all data in a
given performance category and submission mechanism. We believe that
this modified performance period as well as the modifications we are
making to our scoring methodology as reflected in section II.E.6. of
this final rule with comment period address a number of the concerns
the commenters have raised. Lastly, we note that section 1848(q)(6) of
the Act requires us to apply the MIPS adjustment based on a linear
sliding scale and an adjustment factor of an applicable percent, which
the statute defines as 4 percent for 2019. We do not have the
discretion to apply a smaller adjustment factor, such as 2 percent, to
MIPS eligible clinicians.
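The linear sliding scale described in section 1848(q)(6) of the Act can be sketched in a simplified form. The function below is purely an illustration, not the CMS scoring methodology: the 0-100 final score scale and the performance threshold value are assumptions for the example, and the sketch omits budget neutrality and the exceptional-performance bonus.

```python
def mips_adjustment(final_score, threshold=3.0, applicable_percent=4.0):
    """Illustrative linear sliding scale (not the actual CMS formula).

    Assumes a final score on a 0-100 scale and a hypothetical
    performance threshold. Scores above the threshold scale linearly
    up to +applicable_percent at a score of 100; scores below scale
    linearly down to -applicable_percent at a score of 0.
    """
    if final_score >= threshold:
        return applicable_percent * (final_score - threshold) / (100.0 - threshold)
    return -applicable_percent * (threshold - final_score) / threshold
```

Under this sketch, a perfect score of 100 yields the full +4 percent, a score of 0 yields -4 percent, and a score at the threshold yields no adjustment; a real adjustment would also reflect the budget-neutral scaling factor, which is outside the scope of this illustration.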
Comment: Multiple commenters recommended that 2017 be utilized for
reporting purposes only and not payment purposes. Their recommendations
ranged from having 2017 function as a straightforward reporting-only
year, such as an ``implementation and benchmarking'' year that would
still allow CMS to collect data but would not use it for financial
impacts in 2019, to utilizing 2017 as a beta test year for MIPS
eligible clinicians, plan capabilities, and system
preparedness. The commenters believed that a staged approach to MACRA
implementation would provide for more coordinated change within the
delivery system for patients, which must remain a focus for all as we
continue embracing the Triple Aim of improving the patient experience
of care (including quality
[[Page 77081]]
and satisfaction); improving the health of populations; and reducing
the per capita cost of health care. More information regarding the
Triple Aim may be found at http://www.hhs.gov/about/strategic-plan/strategic-goal-1/.
Response: We would like to explain that MIPS is a program where
payment adjustments must be applied based on each MIPS eligible
clinician's total performance on measures and activities. As such, we
are not able to apply MIPS payment adjustments based on reporting
alone. Additionally, as we have discussed above, we have made
modifications to the performance period for the transition year of
MIPS, as well as to the scoring methodology, as discussed in section
II.E.6. of this final rule with comment period to allow MIPS eligible
clinicians the opportunity to gain experience under the program without
negative payment consequences.
Comment: Other commenters urged changes to MIPS to provide
flexibility for small practices. The commenters suggested a voluntary
phase-in for small practices over a several-year period. Alternatively,
the commenters suggested that CMS should not penalize very small
practices (for example, five or fewer MIPS eligible clinicians) for a
specified period of time, allowing them to implement and learn about
MIPS reporting. Another commenter suggested that for the transition
year of MIPS, CMS could permit small practices to be credited with full
participation in MIPS based on a single quarter of successfully
submitted 2017 data and permit larger practices to submit two quarters
of data.
Response: We have provided considerable flexibility for small
practices throughout our MIPS proposals and this final rule with
comment period. Specifically, we believe our modified low-volume
threshold policy, as discussed in section II.E.3.c. of this final rule
with comment period, will provide small groups considerable flexibility
that will address the commenters' concerns.
Comment: Some commenters were concerned that CMS statements from
the proposed rule--specifically, that MIPS eligible clinicians do not
have to begin reporting at the start of the performance period,
suggesting that MIPS eligible clinicians will have more time to collect
data, change workflows, and implement required MIPS and APM changes--
create confusion, as many of the MIPS program's quality measures
require actions to be taken at the point of care and cannot be
completed at a later date.
Response: Our comments from the proposed rule accurately reflected
our proposed policies. We regret any confusion created by statements in
the proposals. The commenters are correct that many quality measures
are required to be reported for every encounter. It is also correct,
however, that other quality measures do not require reporting of every
encounter (for example, NQF 0043: Pneumonia Vaccination Status for
Older Adults). In general, the performance period is a window of time
to report measures; depending on the measure, MIPS eligible clinicians
may need to report for just one quarter and the specified number of
encounters for a given measure, or may need multiple encounters in
multiple quarters for other measures.
Comment: Some commenters stated that the proposal interrupts their
current short-term course of action of meeting Meaningful Use in 2016
and requested that we utilize 2017 as a preparation year to implement,
adopt, measure, monitor, and manage new measures and boost performance
on measures that previously had low thresholds for which MIPS eligible
clinicians have to maximize performance.
Response: We note that for those MIPS eligible clinicians who have
previously participated in the EHR Incentive Program, the measures and
objectives required under the advancing care information performance
category represent a reduction in the number and types of measures
previously required. More information on the advancing care
information performance category can be found in section II.E.5.g. of
this final rule with comment period.
Comment: There were various comments regarding the duration of the
MIPS performance period. Many commenters supported the 12-month
performance period and requested that CMS adhere to that timeline. The
commenters stated that if timelines must be changed, CMS should do so
before the performance period begins. Several commenters supported the
performance period of one full year versus 90 days. They believed this
would lead to consistent and high-quality data submission. Another
commenter generally supported the proposed performance period but
cautioned CMS that any shortened performance periods could burden
certain MIPS eligible clinicians whose practices vary in volume based
on factors such as their geographies, specialties, and nature of the
patients they treat that are outside of their control. Other commenters
believed CMS should not delay the Quality Payment Program
implementation or finalize an abbreviated performance period in the
transition year. These commenters suggested that CMS act immediately on
the premise that implementation for 2017 should begin now with clear
education and guidance in order to ensure successful transitions to the
new Quality Payment Program.
Response: We appreciate the commenters' support. We believe that
measuring performance on a 12-month period is the most accurate and
reliable method for measuring a MIPS eligible clinician's performance.
We note that we are modifying our proposal to require reporting for a
minimum continuous 90-day period of time within the CY 2017 performance
period for the majority of available submission mechanisms for all data
in a given performance category and submission mechanism. However, we
strongly encourage all MIPS eligible clinicians to submit data for up
to the full calendar year if feasible for their practice. We anticipate
that MIPS eligible clinicians who are able to submit a more robust data
set, such as data on a 12-month period, will have the benefit of having
their full population of patients measured, which will assist these
MIPS eligible clinicians on their quality improvement goals.
Comment: Some commenters believed MACRA's four MIPS performance
categories are adding complexity to the delivery of patient-centered
care and do not increase the time medical clinicians spend with
patients. Specifically, the commenters believed that there is not much
of a difference between PQRS/MU and the new ``quality'' and ``advancing
care information'' performance categories. The commenters added that
the improvement activities performance category appears complicated and
the cost performance category is intensive. The commenters proposed a
solution that measurable elements be for a 90-day period during the
calendar year so that measuring tools will not need to be in place at
all times, resulting in less disruption and a greater focus on
patients.
Response: Our intention in creating MIPS is to provide a more
comprehensive and simplified system that provides value. The commenters
are correct that we maintained many elements of the PQRS and EHR
Incentive Program that we found through experience to be meaningful to
clinicians. The requirements for the cost and improvement activities
performance categories are described in sections II.E.5.e. and
II.E.5.f., respectively, of this final rule with comment period. We
believe these performance categories to be very low in burden. In
addition, as
[[Page 77082]]
described in section II.E.5.e of this final rule with comment period,
the cost performance category will account for 0 percent of the final
score in 2019, and we are redistributing the final score weight from
the cost performance category to the quality performance category. Lastly,
as noted above, we are allowing MIPS eligible clinicians to report on
quality, improvement activities, and advancing care information
performance category information for a minimum of a continuous 90-day
period during the CY 2017 performance period for the majority of
available submission mechanisms for all data in a given performance
category and submission mechanism. In addition, the cost performance
category will be calculated based on the performance period using
administrative claims data. As a result, individual MIPS eligible
clinicians and groups will not be required to submit any additional
information for the cost performance category.
Comment: Another commenter believed a full year of quality
reporting is necessary to ensure data reliability for small practices
but encouraged CMS to finalize a 90-day performance period for the
improvement activities and advancing care information performance
categories. The commenter believed CMS could finalize a shorter
performance period for quality reporting in the future if 2015 data is
modeled to show sufficient reliability under a shorter performance
period.
Response: We agree with the commenter and believe that measuring
performance on a 12-month period is the most accurate method for
measuring a clinician's performance. However, for the transition year
of MIPS, we are providing flexibility while maintaining reliability and
finalizing a modified performance period, as discussed above, so that
MIPS eligible clinicians may familiarize themselves with MIPS
requirements.
Comment: Several commenters requested that CMS define the
performance period as less than a full year. The suggestions for the
start date varied, including: A suggested start date of July 1, 2017,
which would allow MIPS eligible clinicians enough time to review and
select appropriate measures; a 9-month performance period of April 1
through December 31, 2017; a 90-day period from January 1 through March
31 of each year, because the commenter believed that this shorter time
frame would not differ significantly from a full-year assessment
period; and a period occurring from January 15 through April 15 so that
reports could be compiled and tested prior to submission.
These commenters cited various concerns, including that full calendar
year reporting would be a significant departure from current reporting
requirements under the EHR Incentive Program and that it would not
allow for full validation and testing of EHR-generated data following
software upgrades or measurement specification changes. Other
commenters were concerned that the proposal to use a full calendar year
for the performance period could create administrative burden for
practices and limit innovation without improving the validity of the
data. The commenters recommended that in future years, CMS take
advantage of the flexibility granted under the MACRA statute to allow
MIPS eligible clinicians to select a shorter performance period for
either the MIPS program or APM incentive payments. Another commenter
believed that CMS should permit MIPS eligible clinicians to select a
shorter performance period if they believe it is more appropriate for
their practice.
Response: We do understand and appreciate the concerns raised by
commenters that the performance period for the transition year of the
program may be a shorter length than 12 months. For the transition year
of MIPS, we are providing flexibility while maintaining reliability and
finalizing a modified performance period, as discussed above, so that
MIPS eligible clinicians may familiarize themselves with MIPS
requirements.
Comment: A few commenters noted that measures for the cost
performance category may need to be calculated over a longer period of
time in order to ensure their reliability and applicability to
practices, and recommended that if CMS shortens the initial MIPS
performance period, CMS should make a distinction between performance
periods for performance categories where data submission is required
versus those where CMS calculates measures using administrative claims
data. The commenters suggested that CMS should conduct detailed
analysis of VM data to determine the extent to which including data for
a year rather than 6 or 9 months improves reliability and expands
applicability of the measures.
Response: We appreciate the commenters' suggestions. We have not
done an analysis to look at reliability of the measures using a 6-month
or 9-month performance period. We will consider this approach for
future rulemaking.
Comment: Another commenter recommended that CMS should also reduce
the case minimums for measures as MIPS eligible clinicians will not
have sufficient time to see the same number of patients during a
shortened performance period.
Response: We refer the commenter to section II.E.6.a.(2) of this
final rule with comment period where we discuss the quality scoring
proposals and the case minimum requirements.
Comment: Other commenters recommended a 90-day performance period
for 2017 for private specialty practices, as well as a 90-day
performance period for any reporting year that the practice is required
to upgrade their version of CEHRT. For example, the commenters noted
that in mid-2017, many MIPS eligible clinicians will be upgrading from
EHR technology certified to the 2014 Edition to EHR technology
certified to the 2015 Edition. The commenters stated that this can
often cause data integrity issues and would continuously place the
practice on a split CEHRT any year that this type of upgrade occurs.
They suggested a 90-day performance period during the upgrade year
would allow a practice to upgrade and attest to the most recent version
and standards.
Response: We are modifying our proposal to allow reporting for a
minimum of a continuous 90-day period of time within the CY 2017
performance period for the majority of available submission mechanisms
for all data in a given performance category and submission mechanism.
Additionally, we understand the commenters' concerns and rationale for
requesting a 90-day performance period. We note that for the first
performance period in 2017, we will accept a minimum of 90 days of data
within CY 2017, though we greatly encourage MIPS eligible clinicians to
meet the full year performance period. In order to allow MIPS eligible
clinicians and groups adequate time to transition to technology
certified to the 2015 Edition for use in CY 2018, we believe it is
appropriate to also allow a performance period of continuous 90-day
period within the CY for the advancing care information performance
category in CY 2018.
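The minimum reporting window described above can be sketched as a simple date test. The function below is a hypothetical illustration, not a CMS tool; its name and structure are assumptions. It checks only that a chosen span is a continuous period of at least 90 days falling entirely within the calendar-year performance period.

```python
from datetime import date

def valid_minimum_period(start, end,
                         cy_start=date(2017, 1, 1),
                         cy_end=date(2017, 12, 31)):
    """Hypothetical check: the reporting window must lie entirely
    within the calendar-year performance period and span at least 90
    continuous days (counting both endpoints)."""
    within_cy = cy_start <= start and end <= cy_end
    long_enough = (end - start).days + 1 >= 90
    return within_cy and long_enough

# January 1 through March 31, 2017 is exactly 90 days.
valid_minimum_period(date(2017, 1, 1), date(2017, 3, 31))   # True
# A window ending after the calendar year does not qualify.
valid_minimum_period(date(2017, 11, 1), date(2018, 1, 29))  # False
```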
Comment: Another commenter requested that CMS offer advance notice
appropriate to the size of the change (for example, transitioning to
new editions of CEHRTs might require years of notice, whereas annually
updated benchmarks might require only a few months). The commenter
requested that the proposed policies not be implemented until at least
6 months after the final rule with comment period is published.
Response: We will provide as much advance notice as is necessary
when
[[Page 77083]]
making changes to the MIPS program. We recognize that all parties
involved in the MIPS program require advance notice to make adjustments
to accommodate changes.
Comment: Some commenters suggested that CMS shorten the performance
period to 9 months of the calendar year, followed by 3 months of data
analysis to calculate the scores and MIPS payment adjustments. The
rationale for this recommendation included allowing for a number of
program improvements, including reducing administrative burden in MIPS,
aligning the performance period across categories, shrinking the 2-year
lag period between performance and payment, and increased relevance and
timeliness of feedback. The commenters also stated that this would give
opportunity to set benchmarks based on more current data. Based on one
commenter's polling of its members, 92 percent preferred a performance
period of any 90 consecutive days compared to the proposed performance
period.
Response: We considered utilizing a 9-month performance period as
the commenter recommended; however, we did not adopt this option
because it would still require a ``2-year lag'' to account for the
post-submission processes of calculating the MIPS eligible clinician's
final score, establishing budget neutrality, issuing the payment
adjustment factors, and allowing for a targeted review period to occur
prior to the application of the MIPS payment adjustment to MIPS
eligible clinicians' claims. As stated above, we are modifying our
proposal and finalizing that MIPS eligible clinicians will only need to
report for a minimum of a continuous 90-day period in 2017, for the
majority of the data submission mechanisms. We believe this flexibility
will allow for a number of program improvements, including reducing
administrative burden in MIPS for the transition year and will align
across the quality, advancing care information, and improvement
activities performance categories. In addition, we will continue
working with stakeholders to improve feedback provisions under MIPS and
to shorten the ``2-year lag'' that the commenter describes.
Comment: One commenter stated that they recognized a shorter
performance period may present challenges for CMS systems and
processes; therefore, they urged CMS to work with MIPS eligible
clinicians to develop options and a specific plan to provide
accommodations where possible.
Response: We appreciate the comment and will continue to work
closely with stakeholders throughout the Quality Payment Program.
Comment: Other commenters believed a shorter performance period
would eliminate the participation burden and confusion for MIPS
eligible clinicians who may switch practices mid-year and have to track
and report data for multiple TIN/NPI combinations under the proposed
full calendar year performance period.
Response: We agree with the commenter that the shortened minimum
continuous 90-day period of time will assist in decreasing
participation burden. We note that the modified performance period will
not eliminate the need for tracking multiple TIN/NPIs depending upon
the specific circumstances of the MIPS eligible clinician, but we agree
with the commenter that it will mitigate this issue.
Comment: A few commenters recommended a 6-month performance period
for MIPS with an optional look-back period for registries to increase
sample size, validity, and reliability, and an extension of the data
submission deadline for QCDRs to April 30 following the performance
period (that is, 4 months after the performance period) to allow for
the capture and analytics required for the use of risk-adjusted
outcomes data.
Response: Our modified proposal of a continuous 90-day period
within the CY 2017 performance period for all data in a given
performance category and submission mechanism is a minimum period and
we strongly encourage all MIPS eligible clinicians to report on data
for a full year where possible for their practice. We believe this
policy will address the commenters' concerns while maintaining
reliability. Our policies regarding the performance period are
described in more detail in section II.E.4. of this final rule with
comment period. We note that it is not clear how a longer data
submission timeframe will help with the capture of risk-adjusted data
elements used in outcomes measures. In most, if not all, instances, any
co-morbidities affecting the outcome for a patient would be known
before or at the time the care is rendered.
Comment: One commenter suggested that if CMS rejects changing the
initial performance period for 2017 to 90 days, it should implement
preliminary and final performance periods, with analysis periods
(from January to March) and implementation periods (from April to May),
to allow MIPS eligible clinicians to evaluate their performance with
the various MIPS requirements from August to September, followed by a
final performance period from October to December.
Response: We thank the commenter for their feedback. As discussed
above, we are modifying our proposal to allow reporting for a minimum
of a continuous 90-day period within the CY 2017 performance period for
the majority of available submission mechanisms for all data in a given
performance category and submission mechanism.
Comment: Many commenters stated that CMS must work to reduce the 2-
year gap between the performance period and the payment year because it
is burdensome, is not meaningful nor actionable as MIPS eligible
clinicians will not know what they must adjust to meet benchmarks, and
it hinders timely data reporting and feedback. One commenter
acknowledged the operational difficulty associated with having
performance periods close to MIPS payment adjustment periods, but
requested that CMS work to shorten the look back period between
performance assessment and adjustment.
Response: We agree with commenters that improved feedback
mechanisms are always important, and we will continue working with
stakeholders to provide timely and better feedback under MIPS and to
shorten the ``2-year gap'' that the commenter describes.
Comment: There were various suggestions on the most appropriate
time gap between the performance period and the payment year. Several
commenters suggested that a 1-year gap would be more appropriate and
others proposed a 6-month time gap. Another commenter believed that
the time lag of essentially 2 years between the performance period and
the payment year severely disadvantages MIPS eligible clinicians
falling below the top tier performance threshold and inflates the
rating of competing MIPS eligible clinicians, who can rest on the
laurels of their prior performance years. Further, the commenter noted
that if a MIPS eligible clinician had an unsatisfactory performance
rating (for example, from data collected in January of 2016) and took
corrective action to earn a higher rating, the results of that
corrective action would not be available to the public for a minimum of
2 years. A few commenters believed CMS should increase the relevance
and timeliness of data, which could be provided on a quarterly basis.
Response: We appreciate the commenters' feedback. We agree with the
commenters that a delay between the performance period and the MIPS
payment adjustment year impacts the clinicians' ability to make timely
[[Page 77084]]
improvements within their practice. For the initial years of MIPS, we
do anticipate that this gap between the performance period and the
payment adjustment year will continue to occur to allow time for
submission and calculation of data, issuance of feedback, a targeted
review period, calculation of final scores, and application of
clinician-specific MIPS adjustments in time for the payment year.
Comment: Other commenters believed CMS should use language
clarifying that the MIPS performance period begins on January 1, 2017.
The commenters suggested linking the language for the performance year
with the adjustment year in some way (for example, ``MIPS 2017/19'',
``2017 performance period (2019)'').
Response: We will ensure that all communications clearly indicate
the link between the performance period and the MIPS payment adjustment
year.
Comment: A few commenters expressed support for CMS' proposal of a
90-day claims data run-out. Another commenter stated that if the
proposed window is not feasible, the commenter supported a 60-day
window.
Response: We appreciate the commenter's feedback. Based on further
analyses of Medicare Part B claims for 2014, we have determined that
there is only a 0.5 percent difference in claims processing
completeness when using 90 days rather than 60 days. Therefore, we are
finalizing our alternative proposal at Sec. 414.1325(f)(2) that, for
the submission of Medicare Part B claims, claims with dates of service
during the performance period must be processed no later than 60 days
following the close of the performance period.
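The finalized 60-day claims run-out can likewise be sketched as a date test. The function below is an assumption-laden illustration (its name and structure are hypothetical, not part of any CMS system): a claim counts only if its date of service falls within the performance period and it is processed within 60 days after the period closes.

```python
from datetime import date, timedelta

def claim_included(service_date, processed_date,
                   period_start=date(2017, 1, 1),
                   period_end=date(2017, 12, 31)):
    """Hypothetical illustration of the 60-day run-out: count a claim
    only if its date of service falls within the performance period and
    it is processed no later than 60 days after the period closes."""
    run_out_deadline = period_end + timedelta(days=60)
    return (period_start <= service_date <= period_end
            and processed_date <= run_out_deadline)

# For a CY 2017 period, the 60-day run-out deadline is March 1, 2018.
claim_included(date(2017, 12, 15), date(2018, 2, 20))  # True
claim_included(date(2017, 12, 15), date(2018, 3, 15))  # False
```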
Comment: Another commenter requested more information regarding how
MIPS eligible clinicians participating for part of the performance
period will be assessed against MIPS eligible clinicians participating
for the full performance period. The commenter cautioned against
penalizing MIPS eligible clinicians not practicing for reasons beyond
their control, such as for health reasons. Other commenters expressed
concern that MIPS eligible clinicians could attempt to game the system
with extended leave. Other commenters supported the expectations for
reporting when MIPS eligible clinicians have a break in their practice,
and one commenter expressed concern about MIPS eligible clinicians who
change groups because doing so may negatively impact group performance.
The commenters believed a policy for exceptions may mitigate the
problem and provide consistency. Another commenter stated that MIPS
eligible clinicians with less than 12 months of performance data should
be assessed on the period of time for which they do report.
Response: As discussed in this final rule with comment period, we
are modifying our proposal to allow reporting for a minimum of a
continuous 90-day period within the CY 2017 performance period for the
majority of available submission mechanisms for all data in a given
performance category and submission mechanism. We would like to note
that we are finalizing that individual MIPS eligible clinicians or
groups who report less than 12 months of data (due to family leave,
etc.) would be required to report all performance data available from
the performance period. For example, for the performance period in
2017, MIPS eligible clinicians who have less than 90 days' worth of
data would be required to submit all performance data that they have
available. We are finalizing this proposal with modification to apply
to any applicable performance period (for example, to any 90-day
period). Based on the Medicare Part B data available to us, we do not
intend to make any scoring adjustments based on the duration of the
performance period. We recognize that a longer (that is, 12-month)
performance period provides greater assurance of reliability with
respect to the submitted data and therefore strongly encourage all MIPS
eligible clinicians who have the ability to submit data for a period
greater than 90 days, to do so.
Comment: A few commenters supported the proposed performance
period, but requested that CMS increase its outreach to MIPS eligible
clinicians who have not successfully reported under PQRS in the past to
help them to achieve the reporting standard during this time. A few
commenters stated that going forward CMS should ensure that the
timeframes for annual MACRA regulations, subregulatory guidance and
other agency communications are sufficient to allow MIPS eligible
clinicians and health plans to act on the information in advance of the
applicable performance years. For purposes of publishing the list of
APMs, Medical Home Models, MIPS APMs, Advanced APMs, and eventually
other-payer APMs, the commenter believed that CMS should start the
process at least 15 months in advance of the applicable performance
year, and finalize the list at least 9 months in advance of the
applicable performance year.
Response: We appreciate the support. We have employed multiple
mechanisms to reach out to all MIPS eligible clinicians to provide
support. We will make every effort to ensure the timeframes for agency
communications are sufficient to allow MIPS eligible clinicians and
health plans to act on the information in advance of the applicable
performance period. Please refer to section II.F.4. of this final rule
with comment period for further information on how we will make clear
the status of any APM upon its first public announcement.
Comment: Other commenters urged CMS to communicate submission
problems to both vendors and practices as soon as possible to allow for
alternative submission mechanisms and to encourage vendors to be open
about their ability to meet data submission standards.
Response: We make every effort to communicate submission problems
to stakeholders through multiple communication channels including
health IT vendors, specialty societies, registries, and MIPS eligible
clinicians as soon as possible and will continue to do so in the
future.
Comment: One commenter supported using claims paid within 60 days
after the performance period.
Response: We agree and appreciate the commenter's support. We are
finalizing our proposal to use claims processed within 60 days after
the end of the performance period for purposes of assessing
performance and computing the MIPS payment adjustment.
After consideration of the comments we received regarding the MIPS
performance period, we are finalizing a modification of our proposal of
a 12-month performance period that occurs 2 years prior to the
applicable payment year. For the transition year of MIPS, we believe it
is important that we provide flexibility to MIPS eligible clinicians as
they familiarize themselves with MIPS requirements while maintaining
reliability. Therefore, we are finalizing at Sec. 414.1320(a)(1) that
for purposes of the 2019 MIPS payment year, for all performance
categories and submission mechanisms except for the cost performance
category and data for the quality performance category reported through
the CMS Web Interface, for the CAHPS for MIPS survey, and for the all-
cause hospital readmission measure, the performance period under MIPS
is a minimum of a continuous 90-day period within CY 2017, up to and
including the full CY (January 1, 2017 through December 31, 2017).
Thus, MIPS eligible clinicians will only need to report for a minimum
of a continuous 90-day period within CY 2017, for the majority of the
[[Page 77085]]
submission mechanisms. This 90-day period can occur anytime within CY
2017, so long as the 90-day period begins on or after January 1, 2017,
and ends on or before December 31, 2017. Additionally, for further
flexibility and ease of reporting this 90-day period can differ across
performance categories. For example, a MIPS eligible clinician may
utilize a 90-day period that spans from June 1, 2017-August 30, 2017
for the improvement activities performance category and could use a
different 90-day period for the quality performance category, such as
August 15, 2017-November 13, 2017. The continuous 90-day period is a
minimum; MIPS eligible clinicians may elect to report data on more than
a continuous 90-day period, including a period of up to the full 12
months of 2017. We note there are special circumstances in which MIPS
eligible clinicians may submit data for a period of less than 90 days
and avoid a negative MIPS payment adjustment. For example, in some
circumstances, MIPS eligible clinicians may meet data completeness
criteria for certain quality measures in less than the 90-day period.
Also, in instances where MIPS eligible clinicians do not meet the data
completeness criteria for quality measures, we will provide partial
credit for these measures as discussed in section II.E.6. of this final
rule with comment period.
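To make the transition-year window rule concrete, the date arithmetic can be sketched as follows. This is an illustrative helper only, not part of the regulation; the function name and the use of Python's standard `datetime` module are assumptions for the example.

```python
from datetime import date

def is_valid_transition_year_period(start: date, end: date) -> bool:
    """Check a reporting period against the finalized transition-year rule:
    a continuous period of at least 90 days that begins on or after
    January 1, 2017, and ends on or before December 31, 2017."""
    days = (end - start).days + 1  # count both endpoints, per a continuous period
    return (
        days >= 90
        and start >= date(2017, 1, 1)
        and end <= date(2017, 12, 31)
    )
```

Under this sketch, any window of at least 90 consecutive days falling entirely within CY 2017 qualifies, up to and including the full calendar year, and different windows may be chosen for different performance categories.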
For groups that elect to utilize the CMS Web Interface or report
the CAHPS for MIPS survey, we note that these submission mechanisms
utilize certain assignment and sampling methodologies that are based on
a 12-month period. In addition, administrative claims-based measures
(this includes all of the cost measures and the all-cause readmission
measure) are based on an attributed population using the 12-month
performance period. Accordingly, we are finalizing at Sec.
414.1320(a)(2) that for purposes of the 2019 MIPS payment year, for
data reported through the CMS Web Interface or the CAHPS for MIPS
survey and administrative claims-based cost and quality measures, the
performance period under MIPS is CY 2017 (January 1, 2017 through
December 31, 2017). Please note that, unless otherwise stated, any
reference in this final rule with comment period to the ``CY 2017
performance period'' is intended to be an inclusive reference to all
performance periods occurring during CY 2017.
Additionally, we are finalizing at Sec. 414.1320(b)(1) that for
purposes of the 2020 MIPS payment year, the performance period for the
quality and cost performance categories is CY 2018 (January 1, 2018
through December 31, 2018). For the improvement activities and
advancing care information performance categories, we are finalizing
the same approach for the 2020 MIPS payment year that we will have in
place for the transition year of MIPS. Specifically, we are finalizing
at Sec. 414.1320(b)(2) that for purposes of the 2020 MIPS payment
year, the performance period for the improvement activities and
advancing care information performance categories is a minimum of a
continuous 90-day period within CY 2018, up to and including the full
CY 2018 (January 1, 2018 through December 31, 2018).
We are also finalizing a modification to our proposal, which was to
use claims run-out data that are processed within 90 days, if
operationally feasible, after the end of the performance period for
purposes of assessing performance and computing the MIPS payment
adjustment. Specifically, we are finalizing at Sec. 414.1325(f)(2) to
use claims with dates of service during the performance period that
must be processed no later than 60 days following the close of the
performance period for purposes of assessing performance and computing
the MIPS payment adjustment.
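The finalized 60-day claims run-out rule at Sec. 414.1325(f)(2) amounts to a two-part filter: a claim counts only if its date of service falls within the performance period and it was processed within 60 days after the period closes. A minimal sketch, assuming a hypothetical claim record with `date_of_service` and `processed_date` fields:

```python
from datetime import date, timedelta

def claims_for_assessment(claims, period_start: date, period_end: date):
    """Keep claims with dates of service during the performance period
    that were processed no later than 60 days after the close of the
    period (illustrative; the record schema is assumed)."""
    cutoff = period_end + timedelta(days=60)
    return [
        c for c in claims
        if period_start <= c["date_of_service"] <= period_end
        and c["processed_date"] <= cutoff
    ]
```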
Lastly, we are finalizing our proposal that individual MIPS
eligible clinicians or groups who report less than 12 months of data
(due to family leave, etc.) would be required to report all performance
data available from the applicable performance period (for example,
from a 90-day period).
5. MIPS Performance Category Measures and Activities
a. Performance Category Measures and Reporting
(1) Statutory Requirements
Section 1848(q)(2)(A) of the Act requires the Secretary to use four
performance categories in determining each MIPS eligible clinician's
final score under the MIPS: Quality; cost; improvement activities; and
advancing care information. Section 1848(q)(2)(B) of the Act, subject
to section 1848(q)(2)(C) of the Act, describes the measures and
activities that, for purposes of the MIPS performance standards, must
be specified under each performance category for a performance period.
Section 1848(q)(2)(B)(i) of the Act describes the measures and
activities that must be specified under the MIPS quality performance
category as the quality measures included in the annual final list of
quality measures published under section 1848(q)(2)(D)(i) of the Act
and the list of quality measures described in section 1848(q)(2)(D)(vi)
of the Act used by QCDRs under section 1848(m)(3)(E) of the Act. Under
section 1848(q)(2)(C)(i) of the Act, the Secretary must, as feasible,
emphasize the application of outcome-based measures in applying section
1848(q)(2)(B)(i) of the Act. Under section 1848(q)(2)(C)(iii) of the
Act, the Secretary may also use global measures, such as global outcome
measures and population-based measures, for purposes of the quality
performance category. Section 1848(q)(2)(B)(ii) of the Act describes
the measures and activities that must be specified under the cost
performance category as the measurement of cost for the performance
period under section 1848(p)(3) of the Act, using the methodology under
section 1848(r) of the Act as appropriate, and, as feasible and
applicable, accounting for the cost of drugs under Part D.
Section 1848(q)(2)(C)(ii) of the Act allows the Secretary to use
measures from other CMS payment systems, such as measures for inpatient
hospitals, for purposes of the quality and cost performance categories,
except that the Secretary may not use measures for hospital outpatient
departments, other than in the case of items and services furnished by
emergency physicians, radiologists, and anesthesiologists. In the
proposed rule, we solicited comment on how it might be feasible and
when it might be appropriate to incorporate measures from other systems
into MIPS for clinicians that work in facilities such as inpatient
hospitals. For example, it may be appropriate to use such measures when
other applicable measures are not available for individual MIPS
eligible clinicians or when strong payment incentives are tied to
measure performance, either at the facility level or with employed or
affiliated MIPS eligible clinicians.
Section 1848(q)(2)(B)(iii) of the Act describes the measures and
activities that must be specified under the improvement activities
performance category as improvement activities under subcategories
specified by the Secretary for the performance period, which must
include at least the subcategories specified in section
1848(q)(2)(B)(iii)(I) through (VI) of the Act. Section
1848(q)(2)(C)(v)(III) of the Act defines an improvement activity as an
activity that relevant eligible clinician organizations and other
relevant stakeholders identify as improving clinical practice or care
delivery and that the Secretary determines, when effectively executed,
is likely to result in improved outcomes. Section 1848(q)(2)(B)(iii) of
the Act
[[Page 77086]]
requires the Secretary to give consideration to the circumstances of
small practices (consisting of 15 or fewer professionals) and practices
located in rural areas and geographic HPSAs in establishing improvement
activities.
Section 1848(q)(2)(B)(iv) of the Act describes the measures and
activities that must be specified under the advancing care information
performance category as the requirements established for the
performance period under section 1848(o)(2) of the Act for determining
whether an eligible clinician is a meaningful EHR user.
As discussed in the proposed rule (81 FR 28173), section
1848(q)(2)(C)(iv) of the Act requires the Secretary to give
consideration to the circumstances of non-patient facing MIPS eligible
clinicians in specifying measures and activities under the MIPS
performance categories and allows the Secretary, to the extent feasible
and appropriate, to take those circumstances into account and apply
alternative measures or activities that fulfill the goals of the
applicable performance category. In doing so, the Secretary is required
to consult with non-patient facing professionals.
Section 101(b) of MACRA amends certain provisions of section
1848(k), (m), (o), and (p) of the Act to generally provide that the
Secretary will carry out such provisions in accordance with section
1848(q)(1)(F) of the Act for purposes of MIPS. Section 1848(q)(1)(F) of
the Act provides that, in applying a provision of section 1848(k), (m),
(o), and (p) of the Act for purposes of MIPS, the Secretary must adjust
the application of the provision to ensure that it is consistent with
the MIPS requirements and must not apply the provision to the extent
that it is duplicative with a MIPS provision.
We did not request comments on this section, but we did receive a
few comments which are summarized below.
Comment: Some commenters requested that MIPS begin in its most
basic structure, involving as few measures as possible, because
practices, particularly smaller ones, have little or no experience
with these processes and very limited staff. Another
commenter recommended that CMS reduce the number of MIPS measures
across the four performance categories. The commenter expressed concern
that the implementation time will be slow due to developing
relationships with data submission vendors which will lead to practices
being overwhelmed by the number of measures.
Some commenters suggested that instead of focusing on four
performance categories simultaneously, CMS should focus on
interoperability and making that functionality fully workable before
moving on to the next step.
One commenter was very concerned that the cumulative effect of four
sets of largely separate measures and activities, scoring
methodologies, and reporting requirements could result in more
administrative work for practices, not less, and encouraged CMS to
consider additional ways to reduce the MIPS reporting burden for all
practices such as reducing the number of required measures or
activities in each MIPS performance category, lowering measure
thresholds, establishing consistent definitions (such as for ``small
practices'') across categories, and providing more opportunities for
``partial credit.'' Other commenters urged CMS to take every possible
step to dramatically simplify provisions and requirements, and to
revise and develop practice-focused communications to reduce any
remaining perceived complexity.
Another commenter agreed with the level of flexibility CMS has
proposed for MIPS eligible clinicians by allowing them to choose the
specific quality performance measures most applicable to their practice
and stated that CMS should design the requirements within the
performance categories to work in concert with each other to ensure
meaningful quality measurement. Some commenters asked if there will be
interoperability between the four MIPS performance categories.
Response: As discussed in section II.E.5.b.(3) of this final rule
with comment period, we have decreased the data submission criteria for
the quality performance category to a level that reduces burden while
still maintaining meaningful measurements at this time. We will
continue to assess this approach to improve on this aspect in the
future. We appreciate the commenters' request for simplicity and the
need for clear communications. We will continue to look for ways to
simplify the MIPS program in the future and will work to ensure clear
communications with the MIPS eligible clinician community on all of the
MIPS provisions. We note that the definition of a small practice is the
same across all four performance categories and is consistent with the
statute. We have codified the definition of a small practice for MIPS
at Sec. 414.1305 as practices consisting of 15 or fewer clinicians and
solo practitioners.
Further, we are required by statute to utilize the four performance
categories to determine the final score. We appreciate the support and
agree that the goal of the MIPS program is that the four performance
categories should work in concert with one another. In addition, as
discussed in section II.E.5. of this final rule with comment period, we
have modified our policies to have the four performance categories work
more in concert with one another.
Comment: One commenter requested that CMS simplify the MIPS to the
extent practicable by further limiting the number of measures
reportable under each performance category and refraining from
introducing any new and previously untested measures (for example,
population-based quality measures).
Response: In any quality measurement program, we must balance the
data collection burden that we must impose on MIPS eligible clinicians
with the resulting quality performance data that we will receive. We
believe that without sufficiently robust performance data, we cannot
accurately measure quality performance. Therefore, we believe that we
have appropriately struck a balance between requiring sufficient
quality measure data from MIPS eligible clinicians and ensuring robust
quality measurement at this time. Regarding the global and population-
based measures, we refer the reader to section II.E.5.b.(6) of this
final rule with comment period.
Comment: One commenter stated that CMS appears to view the four
MIPS categories as separate but should treat them holistically. The
commenter suggested unifying definitions across all MIPS categories,
such as the proposed definition of a ``small practice'' as consisting
of 15 or fewer clinicians.
Response: We are required by statute to utilize the four
performance categories to determine the final score. As the program
evolves we believe the performance categories will become more
streamlined and integrated. The definition of a small practice is the
same across all four performance categories and is consistent with the
statute. We have codified the definition of a small practice for MIPS
at Sec. 414.1305 as practices consisting of 15 or fewer clinicians and
solo practitioners.
Comment: Some commenters suggested combining the improvement
activities and advancing care information performance categories.
Response: Each of these performance categories is statutorily
mandated, and we believe each has a distinct role in the MIPS program.
Comment: Another commenter stated that data and reporting
requirements should generally be efficient, strong,
[[Page 77087]]
and actionable for the purposes of quality improvement, payment,
consumer decision-making, and any other areas where they can be useful.
Another commenter generally recommended that quality measures in the
MIPS program be meaningful, that innovative science should be
accommodated when achieving quality aims in areas without measures or
therapies, and incentives surrounding cost should reward high-value
care, not simply low cost.
Response: We appreciate the commenters' support.
We have considered the comments received and will take them into
account in the future development of performance feedback through
separate notice-and-comment rulemaking.
(2) Submission Mechanisms
We proposed at Sec. 414.1325(a) that individual MIPS eligible
clinicians and groups would be required to submit data on measures and
activities for the quality, improvement activities and advancing care
information performance categories. We did not propose at Sec.
414.1325(f) any data submission requirements for the cost performance
category and for certain quality measures used to assess performance on
the quality performance category and for certain activities in the
improvement activities performance category. For the cost performance
category, we proposed that each individual MIPS eligible clinician's
and group's cost performance would be calculated using administrative
claims data. As a result, individual MIPS eligible clinicians and
groups would not be required to submit any additional information for
the cost performance category. In addition, we would be using
administrative claims data to calculate performance on a subset of the
MIPS quality measures and the improvement activities performance
category, if technically feasible. For this subset of quality measures
and improvement activities, MIPS eligible clinicians and groups would
not be required to submit additional information. For individual
clinicians and groups that are not MIPS eligible clinicians, such as
physical therapists, but elect to report to MIPS, we would calculate
administrative claims cost measures and quality measures, if data are
available. We proposed multiple data submission mechanisms for MIPS as
outlined in Tables 1 and 2 in the proposed rule (81 FR 28182) and the
final policies identified in Tables 3 and 4 in this final rule with
comment period, to provide MIPS eligible clinicians with flexibility to
submit their MIPS measures and activities in a manner that best
accommodates the characteristics of their practice. We note that other
terms have been used for these submission mechanisms in earlier
programs and in industry.
Table 1--Proposed Data Submission Mechanisms for MIPS Eligible
Clinicians Reporting Individually as TIN/NPI
------------------------------------------------------------------------
Performance category/submission Individual reporting data submission
combinations accepted mechanisms
------------------------------------------------------------------------
Quality........................... Claims.
QCDR.
Qualified registry.
EHR.
Administrative claims (no submission
required).
Cost.............................. Administrative claims (no submission
required).
Advancing Care Information........ Attestation.
QCDR.
Qualified registry.
EHR.
Improvement Activities............ Attestation.
QCDR.
Qualified registry.
EHR.
Administrative claims (if
technically feasible, no submission
required).
------------------------------------------------------------------------
Table 2--Proposed Data Submission Mechanisms for Groups
------------------------------------------------------------------------
Performance category/submission Group Reporting data submission
combinations accepted mechanisms
------------------------------------------------------------------------
Quality........................... QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25 or
more).
CMS-approved survey vendor for CAHPS
for MIPS (must be reported in
conjunction with another data
submission mechanism.)
and
Administrative claims (no submission
required).
Cost.............................. Administrative claims (no submission
required).
Advancing Care Information........ Attestation.
QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25 or
more).
[[Page 77088]]
Improvement Activities............ Attestation.
QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25 or
more).
Administrative claims (if
technically feasible, no submission
required).
------------------------------------------------------------------------
We proposed at Sec. 414.1325(d) that MIPS eligible clinicians and
groups may elect to submit information via multiple mechanisms;
however, they must use the same identifier for all performance
categories and they may only use one submission mechanism per
performance category. For example, a MIPS eligible clinician could use
one submission mechanism for sending quality measures and another for
sending improvement activities data, but a MIPS eligible clinician
could not use two submission mechanisms for a single performance
category such as submitting three quality measures via claims and three
quality measures via registry. We believe the proposal to allow
multiple mechanisms, while restricting the number of mechanisms per
performance category, offers flexibility without adding undue
complexity.
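The proposed Sec. 414.1325(d) constraints can be expressed as a simple validation: one identifier across all performance categories, and at most one submission mechanism per category. The sketch below is illustrative only; the record schema is assumed, and the CAHPS for MIPS survey exception discussed later in this section is omitted for brevity.

```python
def validate_submissions(submissions):
    """Check a hypothetical list of submission records, each a dict with
    'identifier', 'category', and 'mechanism' keys, against the proposed
    Sec. 414.1325(d) constraints. Returns a list of error strings."""
    errors = []
    # Constraint 1: the same identifier must be used for all categories.
    identifiers = {s["identifier"] for s in submissions}
    if len(identifiers) > 1:
        errors.append("multiple identifiers used across performance categories")
    # Constraint 2: only one submission mechanism per performance category.
    mechanisms_by_category = {}
    for s in submissions:
        mechanisms_by_category.setdefault(s["category"], set()).add(s["mechanism"])
    for category, mechanisms in sorted(mechanisms_by_category.items()):
        if len(mechanisms) > 1:
            errors.append(f"{category}: more than one submission mechanism used")
    return errors
```

For instance, submitting quality measures via registry and improvement activities via attestation passes, while splitting quality measures between claims and registry fails.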
For individual MIPS eligible clinicians, we proposed at Sec.
414.1325(b), that an individual MIPS eligible clinician may choose to
submit their quality, improvement activities, and advancing care
information performance category data using qualified registry, QCDR,
or EHR submission mechanisms. Furthermore, we proposed at Sec.
414.1400 that a qualified registry, health IT vendor, or QCDR could
submit data on behalf of the MIPS eligible clinician for the three
performance categories: Quality, improvement activities, and advancing
care information. In the proposed rule (81 FR 28280), we expanded third
party intermediaries' capabilities by allowing them to submit data and
activities for quality, improvement activities, and advancing care
information performance categories. Additionally, we proposed at Sec.
414.1325(b)(4) and (5) that individual MIPS eligible clinicians may
elect to report quality information via Medicare Part B claims and
their improvement activities and advancing care information performance
category data through attestation.
For groups that are not reporting through the APM scoring standard,
we proposed at Sec. 414.1325(c) that these groups may choose to submit
their MIPS quality, improvement activities, and advancing care
information performance category data using qualified registry, QCDR,
EHR, or CMS Web Interface (for groups of 25+ MIPS eligible clinicians)
submission mechanisms. Furthermore, we proposed at Sec. 414.1400 that
a qualified registry, health IT vendor that obtains data from a MIPS
eligible clinician's CEHRT, or QCDR could submit data on behalf of the
group for the three performance categories: Quality, improvement
activities, and advancing care information. Additionally, we proposed
that groups may elect to submit their improvement activities or
advancing care information performance category data through
attestation.
For those MIPS eligible clinicians participating in an APM that
uses the APM scoring standard, we refer readers to the proposed rule
(81 FR 28234), which describes how certain APM Entities submit data to
MIPS, including separate approaches to the quality and cost performance
categories for APMs.
We proposed one exception to the requirement for one reporting
mechanism per performance category. Groups that elect to include CAHPS
for MIPS survey as a quality measure must use a CMS-approved survey
vendor. Their other quality information may be reported by any single
one of the other proposed submission mechanisms.
While we proposed to allow MIPS eligible clinicians and groups to
submit data for different performance categories via multiple
submission mechanisms, we encouraged MIPS eligible clinicians to submit
MIPS information for the improvement activities and advancing care
information performance categories through the same reporting mechanism
that is used for quality reporting. We believe it would reduce
administrative burden and would simplify the data submission process
for MIPS eligible clinicians by having a single reporting mechanism for
all three performance categories for which MIPS eligible clinicians
would be required to submit data: Quality, improvement activities, and
advancing care information. However,
we were concerned that not all third party entities would be able to
implement the changes necessary to support reporting on all performance
categories in the transition year. We solicited comments for future
rulemaking on whether we should propose requiring health IT vendors,
QCDRs, and qualified registries to have the capability to submit data
for all MIPS performance categories.
As noted at 81 FR 28181, we proposed that MIPS eligible
clinicians may report measures and activities using different
submission methods for each performance category if they choose for
reporting data for the CY 2017 performance period. As we gain
experience under MIPS, we anticipate that in future years it may be
beneficial for, and reduce burden on MIPS eligible clinicians and
groups, to require data for multiple performance categories to come
through a single submission mechanism.
Further, we will be flexible in implementing MIPS. For example, if
a MIPS eligible clinician does submit data via multiple submission
mechanisms (for example, registry and QCDR), we would score all the
measures in each submission mechanism and use the highest performance
score for the MIPS eligible clinician or group, as described in the
proposed rule (81 FR 28247). However, we would not be blending measure
results across
submission mechanisms. We encourage MIPS eligible clinicians to report
data for a given performance category using a single data submission
mechanism.
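The highest-score rule for the rare multiple-mechanism case can be sketched as follows. This is illustrative only: the simple mean stands in for the actual scoring methodology, and the function name and input shape are assumptions for the example.

```python
def category_score(measures_by_mechanism):
    """Given a hypothetical mapping of submission mechanism name to the
    list of measure scores submitted via that mechanism for one
    performance category, score each mechanism's measure set separately
    (here, a simple mean as a stand-in for the actual methodology) and
    keep the highest. Measure results are never blended across
    mechanisms."""
    per_mechanism = {
        mech: sum(scores) / len(scores)
        for mech, scores in measures_by_mechanism.items()
    }
    best = max(per_mechanism, key=per_mechanism.get)
    return best, per_mechanism[best]
```

For example, if a clinician submits quality data through both a registry and a QCDR, each mechanism's measure set is scored on its own and only the better-scoring set is used.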
Finally, section 1848(q)(1)(E) of the Act requires the Secretary to
encourage the use of QCDRs under section 1848(m)(3)(E) of the Act in
carrying out MIPS. Section 1848(q)(5)(B)(ii)(I) of the Act requires the
Secretary, under the final score methodology, to encourage MIPS
eligible clinicians to report on applicable measures with respect to
the quality performance category through the use of CEHRT and QCDRs. We
note that the proposed rule used the term CEHRT and certified health IT
in different contexts. For an explanation of these terms and contextual
use within
[[Page 77089]]
the proposed rule, we refer readers to the proposed rule (81 FR 28256).
We have multiple policies to encourage the usage of QCDRs and
CEHRT. In part, we are promoting the use of CEHRT by awarding bonus
points in the quality scoring section for measures gathered and
reported electronically via the QCDR, qualified registry, CMS Web
Interface, or CEHRT submission mechanisms; see the proposed rule (81 FR
28247). By promoting the use of CEHRT through various submission
mechanisms, we believe MIPS eligible clinicians have flexibility in
implementing electronic measure reporting in a manner which best suits
their practice.
To encourage the use of QCDRs, we have created opportunities for
QCDRs to report new and innovative quality measures. In addition,
several improvement activities emphasize QCDR participation. Finally,
we allow for QCDRs to report data on all MIPS performance categories
that require data submission and hope this will become a viable option
for MIPS eligible clinicians. We believe these flexible options will
allow MIPS eligible clinicians to more easily meet the submission
criteria for MIPS, which in turn will positively affect their final
score.
We requested comments on these proposals.
The following is a summary of the comments we received on our
proposals regarding MIPS data submission mechanisms.
Comment: Several commenters expressed concern that, by providing
too many data submission mechanisms and reporting flexibility to MIPS
eligible clinicians, CMS would be allowing MIPS eligible clinicians to
report on arbitrary quality metrics or metrics on which those MIPS
eligible clinicians are performing well versus metrics that reflect
areas of needed improvement. The commenters recommended that CMS ensure
high standard final scoring, promote transparency, and enable
meaningful comparisons of the clinicians' performance for specific
services.
Response: We believe allowing multiple data submission mechanisms
is beneficial to the MIPS eligible clinicians as they may choose
whichever data submission mechanism works best for their practice. We
have provided many data submission options to allow the utmost
flexibility for the MIPS eligible clinician. Based on our experience
with existing quality reporting programs such as PQRS, we do not
believe multiple data submission mechanisms will encourage MIPS
eligible clinicians to report on arbitrary quality metrics or metrics
on which those MIPS eligible clinicians are performing well versus
metrics that reflect areas of needed improvement. We will monitor
measure selection and performance through varying data submission
mechanisms as we implement the program. However, we agree with
commenters that measuring meaningful quality measures and encouraging
improvement in the quality of care are important goals of the MIPS
program. As such, we will monitor whether data submission mechanisms
are allowing MIPS eligible clinicians to focus only on metrics where
they are already performing well and will address any modifications
needed to our policies based on these monitoring efforts in future
rulemaking.
Comment: Another commenter supported the requirement to use only
one submission mechanism per performance category. Other commenters
appreciated that CMS is allowing MIPS eligible clinicians to choose
data submission options that vary by performance category.
Response: We agree with the commenters and appreciate the support.
We are finalizing the policy as proposed of requiring MIPS eligible
clinicians to submit all performance category data for a specific
performance category via the same data submission mechanism. In
addition, we are finalizing the policy to allow MIPS eligible
clinicians to submit data using differing submission mechanisms across
different performance categories. We refer readers to section
II.E.5.a.(2) of this final rule with comment period where we discuss
our approach for the rare situations where a MIPS eligible clinician
submits data for a performance category via multiple submission
mechanisms (for example, submits data for the quality performance
category through a registry and QCDR), and how we score those MIPS
eligible clinicians. We further note that in that section we are
seeking comment for further consideration on different approaches for
addressing this scenario.
Comment: Another commenter sought clarification as to whether MIPS
eligible clinicians may use more than one data submission method per
performance category. The commenter recommended the use of multiple
data submission methods across performance categories because there are
currently significant issues with extracting clinical data from EHRs to
provide to a third party for calculation. The commenter believed that
requiring a single submission method may force MIPS eligible clinicians
to submit inaccurate data that does not reflect actual performance.
Response: As noted in this final rule with comment period, MIPS
eligible clinicians will have the flexibility to choose different
submission mechanisms across different performance categories for
example, utilizing a registry to submit data for quality and CEHRT for
the advancing care information performance category. MIPS eligible
clinicians will, however, need to choose one submission mechanism per
performance category, except for MIPS eligible clinicians who elect to
report the CAHPS for MIPS survey, which must be reported via a CMS-
approved survey vendor in conjunction with another submission mechanism
for all other quality measures. As discussed in this section of this
final rule with comment period, we are finalizing policy that allows
MIPS eligible clinicians to choose to report for as few as
90 consecutive days within CY 2017 for the majority of submission
mechanisms. We believe this allows for adequate time for those MIPS
eligible clinicians who are not already successfully reporting quality
measures meaningful to their practice via CEHRT under the EHR Incentive
Program and/or PQRS to evaluate their options and select the measures
and a reporting mechanism that will work best for their practice. We
will be providing subregulatory guidance for MIPS eligible clinicians
who encounter issues with extracting clinical data from EHRs.
Comment: A few commenters recommended that CMS reduce complexity by
reducing the number of available reporting methods as health IT reduces
the need to retain claims and registry-based reporting in the program.
Other commenters supported the use of electronic data reporting
mechanisms and noted that, due to the complexity of the MIPS, they were
concerned that using claims data submission for quality measures may
place MIPS eligible clinicians at a disadvantage due to the significant
lag between performance feedback and the performance period.
Response: We appreciate the commenters' feedback. We agree that the
usage of health IT in the future will reduce our reliance on non-IT
methods of reporting such as claims. We do believe, however, that we
cannot eliminate submission mechanisms such as claims until broader
adoption of health IT and registries occurs. Therefore, we do intend to
finalize both the claims and registry submission mechanisms. We also
refer readers to section II.E.8.a. for final policies regarding
performance feedback.
Comment: Some commenters expressed appreciation for our proposal to
continue claims-based reporting for the quality performance category
because this is the most convenient method for hospital-based
clinicians. The commenters explained that hospital-based MIPS eligible
clinicians must use the EHRs of the hospitals in which they practice,
which may limit the capabilities of these EHRs for reporting measures.
Other commenters requested that CMS ensure that the option for claims
reporting was available to all MIPS eligible clinicians, noting that
there was only one anesthesia-related quality measure available for
reporting via registry. Under such circumstances, the commenters asked
CMS to ensure that MIPS did not impose excessive time and cost burdens
on MIPS eligible clinicians by forcing them to use a different
submission mechanism. Another commenter noted that the preservation of
the claims-based reporting option will help those emergency medicine
practices that have relied on this reporting option in the past make
the transition to the new MIPS requirements. The commenter noted the
additional administrative burden associated with registry reporting,
including registration fees.
Response: We appreciate the commenters' support. We do note that we
intend to reduce the number of claims-based measures in the future as
more measures are available through health IT mechanisms such as
registries, QCDRs, and health IT vendors, but we understand that many
MIPS eligible clinicians still submit these types of measures. We
believe claims-based measures are a necessary option to minimize
reporting burden for MIPS eligible clinicians at this time. We intend
to work with MIPS eligible clinicians and other stakeholders to
continue improving available measures and reporting methods for MIPS.
In addition, we are finalizing policies that offer MIPS eligible
clinicians substantial flexibility and sustain proven pathways for
successful participation. Those MIPS eligible clinicians who are not
already successfully reporting quality measures meaningful to their
practice via one of these pathways will need to evaluate the options
available to them and choose which available reporting mechanism and
measures they believe will work best for their practice.
Comment: A few commenters recommended that more quality measures be
made available for reporting via claims or EHRs noting that there were
more quality measures available for reporting by registry compared with
EHRs or claims. The commenters stated that this will push clinicians to
sign up with registries, undercut the full use of EHRs, and serve only
the interests of organizations that manage registries.
Response: We appreciate the commenters' concern and are working
with measure developers to develop more measures that are
electronically based. We refer the commenter to the Measure Development
Plan for more information, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf.
Additionally, in section II.E.9.b. of this final rule with
comment period, we have expanded health IT vendors' opportunities by
allowing health IT vendors to submit data on measures, activities, or
objectives for any of the following MIPS performance categories: (i)
Quality; (ii) improvement activities; or (iii) advancing care
information. In addition, the health IT vendor submitting data on
behalf of a MIPS eligible clinician or group would be required to
obtain data from the MIPS eligible clinician's certified EHR
technology. However, the health IT vendor would be able to submit the
same information that a qualified registry is able to submit.
Therefore, we do not believe there is a disparity between the quality
data submission capabilities of health IT vendors and qualified
registries.
Comment: Other commenters stated that the use of CEHRT in all areas
of the MIPS program should be required rather than just encouraged. The
commenters stated that the use of CEHRT is required for participation
in the Meaningful Use EHR Incentive Programs, is vitally important for
ensuring successful interoperability, and is already part of the
definition of a Meaningful EHR User for MIPS.
Response: We do not believe it is appropriate to require CEHRT in
all areas of the MIPS program as many MIPS eligible clinicians may not
have had past experience relevant to the performance categories and use
of EHR technology because they were not previously eligible to
participate in the Medicare EHR Incentive Program. The restructuring of
program requirements described in this final rule with comment period
is geared toward increasing participation and EHR adoption. We believe
this is the most effective way to encourage the adoption of CEHRT, and
introduce new MIPS eligible clinicians to the use of certified EHR
technology and health IT overall. As discussed in section
II.E.6.a.(2)(f) of this final rule with comment period, we are
promoting the use of CEHRT by awarding bonus points in the quality
scoring section for measures gathered and reported electronically via
the QCDR, qualified registry, CMS Web Interface, or CEHRT submission
mechanisms. By promoting use of CEHRT through various submission
mechanisms, we believe MIPS eligible clinicians have flexibility in
implementing electronic reporting in a manner which best suits their
practice.
Comment: One commenter requested information on how non-Medicare
payers would route claims data to CMS for purposes of considering cost
performance category data.
Response: All measures used under the cost performance category
would be derived from Medicare administrative claims data submitted for
billing on Part B claims by MIPS eligible clinicians and as a result,
participation would not require use of a separate data submission
mechanism. Please note that the cost performance category is being
reweighted to zero for the transition year of MIPS. Refer to section
II.E.5.e. of this final rule with comment period for more information on the
cost performance category.
Comment: Other commenters requested clarification on the difference
between ``claims'' and ``administrative claims'' as reporting methods,
citing slides 24 and 39 of the May 10th Quality Payment Program
presentation, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Quality-Payment-Program.html. The commenters were
confused because ``claims'' was listed as a method of reporting but it
was stated that ``administrative claims'' will not require submission.
Response: The ``claims'' submission mechanism refers to those
quality measures as described in section II.E.5.b.(6). of this final
rule with comment period. The claims submission mechanism requires MIPS
eligible clinicians to append certain billing codes to denominator
eligible claims to indicate to us the required quality action or
exclusion occurred. Conversely, the administrative claims submission
mechanism refers to those measures described in section II.E.5.b. for
the quality performance category and section II.E.5.e. for the cost
performance category of this final rule with comment period.
Administrative claims submissions require no separate data submission
to CMS. Rather, we calculate these measures based on data available
from MIPS eligible clinicians' billings on Medicare Part B claims.
Comment: Other commenters stated that some of the measures and
activities, such as the CAHPS for MIPS survey, were dependent on third
party intermediaries, over which practices have little control. The
commenters recommended that CMS reduce requirements that are outside of
the practice's control.
Response: We believe the MIPS program has a broad span of measures
and activities from which to choose. There are many measures and
activities that are not dependent on a third party intermediary. We
encourage MIPS eligible clinicians to report the measures and
activities that are most meaningful to their practice.
Comment: Another commenter stated that if CMS were to require
vendors to have the capability to submit data for all performance
categories, a vendor would need adequate time to implement any required
changes going forward, would need CMS to produce implementation guides
for 2017 reporting as soon as possible with the capability to ask CMS
clarifying questions, and would need a testing tool no later than the
third quarter. Several commenters did not support the proposed
requirement that vendors have the capability to submit data for all
MIPS performance categories. The commenters stated many product
developers and product or service vendors have developed solutions
tailored to specific areas of healthcare quality and performance
improvement. The commenters stated that given the breadth of the
proposed MIPS requirements, CMS should not require health IT companies
to have the capability to submit information for all four MIPS
performance categories because this task may be outside of their
organizational and client priorities. Another commenter stated that
while they appreciate CMS' attempts to reduce administrative burden
they have a concern that third party entities will not be able to
implement the necessary changes to support reporting on all performance
categories in the transition year. In addition, the commenter was
concerned that the additional cost of creating this functionality will
be passed on to MIPS eligible clinicians in the form of higher fees for
using those products and services. The commenter urged CMS to work with
health IT developers, vendors, and other data intermediaries to ensure
that data products and services evolve as CMS's policies evolve and to
ensure adequate advanced notice of upcoming changes so that MIPS
eligible clinicians will not be penalized for failing to report data
the third party intermediary's technology was not updated to collect.
Response: We would like to explain that we are not finalizing a
requirement that a third party intermediary submitting data on behalf
of a MIPS eligible clinician or group must become qualified to submit
data for multiple MIPS performance categories, nor are we finalizing a
certification requirement for submission of data. We are instead
finalizing specific requirements for QCDRs related to quality data
submission, and requiring that a health IT vendor or other authorized
third party intermediary submitting data for any or all of the MIPS
performance categories on behalf of a MIPS eligible clinician or group
meet the form and manner requirements for each submission method.
We direct readers to section II.E.9.b. of this final rule with comment
period for further discussion of health IT vendor and other authorized
third party intermediaries. We direct readers to section II.E.9.a. of
this final rule with comment period for further discussion of
submission requirements for QCDRs.
Comment: Another commenter stated that the CMS Web Interface should
have fewer down times during the first quarter submission period,
following the performance period, to compensate for MIPS eligible
clinicians' need to submit their files.
Response: We intend to make every effort to keep the CMS Web
Interface from having down times during the first quarter submission
period. In some instances, down times are required to account for
necessary system maintenance within CMS. When these down times do
occur, we make every effort to ensure that the down times do not occur
near final submission deadlines and to notify all groups and impacted
parties well in advance so they can account for these down times during
the data submission period.
Comment: One commenter encouraged utilizing EHRs and claims to
collect quality measure data whenever possible.
Response: We agree and encourage the use of EHRs to collect quality
measure data whenever possible. However, we intend to reduce the number
of claims-based measures in future years, but we note that many MIPS
eligible clinicians still submit
these types of measures. We believe claims-based measures are a
necessary option to minimize reporting burden for MIPS eligible
clinicians. We intend to work with MIPS eligible clinicians and other
stakeholders to continue improving available measures and reporting
methods for MIPS.
Comment: One commenter expressed concern that multi-specialty
groups reporting through a QCDR would face challenges if multiple
specialties wanted to report non-MIPS measures. This commenter believed
this would require reporting via two different submission mechanisms.
Response: QCDRs are able to report both non-MIPS measures and MIPS
measures. They are provided a great deal of flexibility and should be
able to report for multiple specialties.
Comment: Another commenter requested clarity regarding the
submission mechanisms for a group. The commenter sought flexibility to
use the most appropriate submission mechanism for each of the
performance categories. Another commenter suggested continuing 2017
reporting via CMS Web Interface for groups. The commenter stated that
at a minimum, the CMS Web Interface reporting and EHR direct reporting
should be maintained.
Response: Please refer to the final submission mechanisms in Tables
3 and 4 of this final rule with comment period for the available
submission mechanisms for all MIPS eligible clinicians.
Comment: Another commenter expressed concern that CMS proposed to
allow measures which are available to report via EHR technology to be
reported via a QCDR, because the commenter believed this would result
in unnecessary burden as practices would be required to seek another
data submission vendor beyond their EHR vendor. The commenter
recommended that CMS allow MIPS eligible clinicians to report quality
measures and improvement activities using their certified EHR
technology.
Response: MIPS eligible clinicians will have the flexibility to
submit their quality measures and improvement activities using their
certified EHR technology. The health IT vendor would need to meet the
requirements as described in section II.E.9.b. of this final rule with
comment period to offer this flexibility to their clients.
Comment: A few commenters agreed with the proposal to allow third
party submission entities, such as QCDRs and qualified registries, to
submit data for the performance categories of quality, advancing care
information, and improvement activities. The commenters believed that
allowing MIPS eligible clinicians to use a single, third party data
submission method reduces the administrative burden on MIPS eligible
clinicians, facilitates consolidation and standardization of data from
disparate EHRs and other systems, and enables the third parties to
provide timely, actionable feedback to
MIPS eligible clinicians on opportunities for improvement in quality
and value. Other commenters agreed with the proposals that encourage
the use of QCDRs because QCDRs are able to quickly implement new
quality measures to assist MIPS eligible clinicians with accurately
measuring, reporting, and taking action on data most meaningful to
their practices. Another commenter stated that vendors and QCDRs should
have the capability to submit data for all MIPS performance categories.
The commenter believed that working through a single vendor is the only
way to provide a full picture of overall performance.
Response: We thank the commenters for their support.
Comment: A few commenters expressed support for the Quality Payment
Program's approach of streamlining the PQRS, VM, and EHR Incentive
Program into MIPS and encouraged CMS to continue to allow existing data
reporting tools to report MIPS quality data, noting that hospitals have
already made significant investments in existing reporting tools. Other
commenters supported the option to use a single reporting mechanism
under MIPS. The commenters considered this a positive development, and
one that would be attractive to many groups and hospitals. Some
commenters noted that CMS offers significant flexibility across
performance category reporting options, and supported the proposal to
accept data submissions from multiple mechanisms. The commenters urged
CMS to retain this flexibility in future years and to hold QCDR and
other vendors accountable for offering MIPS reporting capabilities
across all performance categories. One commenter was pleased that CMS
is allowing flexibility in measure selection, in reporting via any
reporting mechanism, and in reporting as an individual or as a group. Another
commenter supported the proposal allowing MIPS eligible clinicians who
are in a group to report on MIPS either as part of the group or
individually. This flexibility would allow clinicians in low-performing
groups the opportunity to reap the benefits of their own higher
performance. Other
commenters were very supportive of the use of bonus points in the
quality performance category to encourage the use of CEHRT and
electronic reporting of CQMs.
Response: We thank the commenters for their support on the various
approaches. We would like to explain that groups must report either
entirely as a group or entirely as individuals; a group may not have
only some of its clinicians report individually. Groups must decide to report as a group
across all four performance categories.
Comment: Another commenter recommended that CMS adopt a clear,
straightforward, and prospective process for practices to determine
whether a MIPS performance category applies to their particular
specialty and subspecialty.
Response: We agree with the commenter and are working to establish
educational tools and materials that will clearly indicate to MIPS
eligible clinicians their requirements based on their specialty or
practice type.
Comment: One commenter urged CMS to offer a quality and cost
performance category measure reporting option in which hospital-based
MIPS eligible clinicians can use the hospital's measure performance
under CMS hospital quality programs for purposes of MIPS.
Response: We appreciate the feedback and will take it into
consideration for future rulemaking. We also note that in the Appendix
in Table C of this final rule with comment period we have created a
specialty-specific measure set for hospitalists.
Comment: Another commenter recommended that CMS and HRSA
collaborate to develop a data submission mechanism that would allow
MIPS eligible clinicians practicing in FQHCs to submit quality data one
time for both MIPS and Uniform Data System (UDS).
Response: We intend to address this option in the future through
separate notice-and-comment rulemaking.
Comment: Some commenters supported the proposed data submission
mechanisms and the proposal that MIPS eligible clinicians and groups
must use the same mechanism to report for a given performance category
with the exception of those reporting the CAHPS for MIPS survey.
Response: We thank the commenters for their support.
Comment: Other commenters agreed with the proposal to maintain a
manual attestation portal option for some of the performance
categories. The commenters believed that this option provided MIPS
eligible clinicians with an option of consolidating and submitting data
on their own, which for some may reduce their overall cost to
participate. The commenters recommended that this option remain in
place for the future, but that if CMS decided to remove it, they
provide EHR vendors at least 18 months' notice to develop and deploy
data submission mechanisms.
Response: We appreciate the support and will take the feedback into
consideration in the future.
Comment: Another commenter encouraged CMS to ensure that the
reporting requirements for MIPS are aligned with each of the American
Board of Medical Specialties (ABMS) Member Board's requirements for
Maintenance of Certification, particularly activities required to
fulfill Part IV: Improvement in Medical Practice.
Response: We align our quality efforts where possible. We intend to
continue to receive input from stakeholders, including ABMS, in the
future.
Comment: One commenter suggested that CMS ensure that the MIPS
reporting process is simple to understand, conducive to automated
reporting and clinically relevant.
Response: We believe we have made the reporting process as flexible
and simple as possible for the MIPS program at this time. We have
provided several data submission mechanisms, activities, and measures
for MIPS eligible clinicians to choose from. We intend to continue to
work to improve the program in the future as we gain experience under
the Quality Payment Program.
Comment: Another commenter was appreciative that CMS outlined a
data validation and auditing process in the proposed rule. The
commenter requested more details about implementation, including CMS'
timeline for providing performance reports to MIPS eligible clinicians.
Response: We thank the commenter for their support. We refer
readers to section II.E.8.e. of this final rule with comment period for
information on data validation and section II.E.8.a. for information on
performance feedback.
Comment: A few commenters urged CMS to integrate patient and family
caregiver perspectives as part of Quality Payment Program development.
The commenters noted that value and quality are often perceived through
``effectiveness'' and ``cost'' whereas the patient typically
prioritizes outcomes beyond clinical measures.
Response: We agree that the patient and family caregiver
perspective is important, but note that we would expect patients and
caregivers to prioritize successful health outcomes. We are finalizing
the policy that the CAHPS for MIPS survey would count as a patient
experience measure which is a type of high priority measure. In
addition, a MIPS eligible clinician may be awarded points under the
improvement activities performance category as the CAHPS for MIPS
survey is included in the Patient Safety and Practice Assessment
subcategory.
Comment: One commenter expressed concern that no measures exist
that are useful to MIPS eligible clinicians working in multiple
settings with diverse patient populations.
Response: We believe the MIPS program has a broad span of measures
and activities from which to choose. There are many measures and
activities that are applicable to multiple treatment facility types and
diverse patient populations. We encourage MIPS eligible clinicians to
report the measures and activities that are most meaningful to their
practice.
Comment: One commenter stated that CMS should clarify the reporting
options for nephrologists who practice in multiple settings. The
commenter urged CMS to provide illustrative examples of options for
nephrologists based on actual sample clinical practices.
Response: The final data submission options for all MIPS eligible
clinicians are outlined in this final rule with comment period in
Tables 3 and 4. We intend to provide further subregulatory guidance and
training opportunities for all MIPS eligible clinicians in the future.
In addition, the MIPS eligible clinician may reach out to the Quality
Payment Program Service Center with any questions.
Comment: Other commenters recommended that CMS not amend the
technical specifications for eCQMs until MIPS eligible clinicians are
required to transition to 2015 Edition CEHRT to report data for MIPS.
In addition, the commenters requested that CMS maintain the eMeasure
versions issued with the EHR Incentive Program Stage 2 final rule until
that transition point. The commenters noted that by delaying any
changes to eCQM measures until 2018, CMS will give the health IT
industry and MIPS eligible clinicians the necessary time to adapt to
new reporting demands and respond appropriately to new specifications.
Response: We understand the concerns of needing necessary time to
adapt to new reporting requirements. Therefore, we did not make major
amendments to the technical standards for eCQMs. We have updated
measure specifications for various eCQMs to align with current clinical
guidelines. However, this alignment should not impact technical
standards and certification requirements. We plan to keep the EHR
community updated to allow necessary time for implementers to adopt any
new standards required to report eCQMs in the future.
Comment: One commenter recommended that technologies such as the
CMS Web Interface be available for submission of all data, not just the
quality performance category.
Response: We appreciate the feedback and note that we are expanding
the ability of the CMS Web Interface to be used for submissions for the
improvement activities, advancing care information, and quality
performance categories.
Comment: Another commenter stated that the avenue for reporting
different measures requires careful consideration because there are
appropriate avenues of reporting depending upon different measure
types. The commenter stated that this should be taken into
consideration during measure development.
Response: We appreciate the feedback and will take this suggestion
into consideration in the future.
Comment: One commenter supported allowing groups to utilize a
CMS-approved survey vendor for CAHPS for MIPS survey data
collection in conjunction with another data submission mechanism.
Another commenter proposed expanding the survey option in the future to
include a CMS-approved survey vendor for CAHPS for MIPS survey
data collection for MIPS eligible clinicians reporting individually.
Response: We would like to note that when a MIPS eligible clinician
utilizes the CAHPS for MIPS survey they must also utilize another data
submission mechanism in conjunction with it. We will take into
consideration the suggestion of expanding the survey option to
individuals in the future.
Comment: One commenter believed that CMS could simplify MIPS
reporting by streamlining the number of submission methods and focusing
on the options that are most appropriate for each performance category.
The commenter recommended the following options: (1) Quality: EHR
Direct, QCDR, Qualified Registry, CMS Web Interface, remove Claims; (2)
Cost: Claims; (3) Improvement Activities: Attestation, Claims, EHR
Direct, QCDR, qualified registry, and CMS Web Interface; (4) Advancing
care information: Attestation, EHR Direct, remove QCDR, remove
qualified registry, and remove CMS Web Interface.
Response: We appreciate the feedback as we are striving to balance
simplicity with flexibility. We believe that having numerous data
submission mechanisms available for selection reduces burden to MIPS
eligible clinicians. The data submission options for all MIPS eligible
clinicians are outlined in this final rule with comment period in
Tables 3 and 4.
Comment: Some commenters expressed concern about the lack of transparency of the
claims-based quality and cost performance category measures. The
commenters recommended that CMS make the claims-based attribution of
patients and diagnoses fully transparent to MIPS eligible clinicians
and beneficiaries. They suggested CMS modify these measures so they accurately
reflect each MIPS eligible clinician's contribution to quality and
resource utilization.
Response: We appreciate the feedback and will take the suggestions
into consideration in the future. We would like to note that
information regarding claims-based quality and cost performance
category measures can be found in the Appendix of this final rule with
comment period under Table A through Table G under the ``data
submission method'' tab. In addition, claims-based quality measures
information may be found at QualityPaymentProgram.cms.gov.
Comment: Another commenter recommended that CMS consider allowing
MIPS eligible clinicians to report across multiple QCDRs because
allowing MIPS eligible clinicians to report through multiple QCDRs
would permit the specificity of reporting required for diverse
specialties, but without increasing the IT integration burden on MIPS
eligible clinicians who might already be reporting through these
registries.
Response: Many QCDRs charge their participants for collecting and
reporting data. Not only might this increase the cost to MIPS eligible
clinicians, but it would make the calculation of the quality score that
much more cumbersome and prone to error. Errors that could occur
include incorrect submission of TIN or NPI information, incomplete data
for one or more measures, etc. We note, however, that MIPS eligible
clinicians do have the flexibility to submit data using different
submission mechanisms across the different performance categories. For
example, one QCDR could report the advancing care information
performance category for a particular MIPS eligible clinician, and that
MIPS eligible clinician could use another QCDR to report the quality
performance category.
Comment: One commenter requested that CMS clearly state the
reporting requirements for each reporting mechanism for quality. The
commenter noted that MIPS eligible clinicians who elect to submit four
eCQMs will submit that data through a QCDR, qualified registry, or
certified EHR using the QRDA standard, and then be restricted in
their ability to use the attestation mechanism for the remaining two
quality measures if they elect to submit non-eCQMs that do not require
certification. The commenter agreed that not all submitted measures
need to be eCQMs, but believed CMS needed to provide greater clarity on
handling such a scenario and wanted CMS to consider the submission
mechanism's ability to submit data using a single standard.
Response: The quality data submission criteria are described in
section II.E.5.a.(2) of this final rule with comment period. We would
like to explain that attestation is not a submission mechanism allowed
for the quality performance category, rather only for the improvement
activities and advancing care information performance categories.
Additionally, we are finalizing our policy that MIPS eligible
clinicians would need to submit data for a given performance category
via only one submission mechanism. We refer readers to section II.E.5.a.(2)
of this final rule with comment period where we discuss our approach
for the rare situations where a MIPS eligible clinician submits data
for a performance category via multiple submission mechanisms (for
example, submits data for the quality performance category through a
registry and QCDR), and how we score those MIPS eligible clinicians. We
further note that in that section we are seeking comment for further
consideration on different approaches for addressing this scenario.
Comment: Some commenters agreed with the proposal of using
submission methods already available in the current PQRS program
because this allows QCDRs to focus on the creation of measures and
adapting to the final MIPS rule rather than on the submission process
itself.
Response: We appreciate the commenters' support.
Comment: Several commenters noted they support the CMS goals of
patient-centered health care, and the aim of the MIPS program for
evidence-based and outcome-driven quality performance reporting. These
commenters appreciated that the flexibility allowed in the MIPS
program, including the variety of reporting options, is intended to
meet the needs of the wide variety of MIPS eligible clinicians. The
commenters believed, however, that the variety of reporting options can
easily create confusion due to the increased number of choices and
methods. Such confusion will be challenging in general, but could be
especially problematic for 2017, given the short time to prepare. One
commenter suggested that technical requirements for reporting options
should be incorporated into CEHRT, and not added through subregulatory
guidance. Another commenter stated that there are too many reporting
options, and the number of options should be reduced.
Response: We appreciate the commenters' support. We have provided
several data submission mechanisms to allow flexibility for the MIPS
eligible clinician. It is important to note that substantive aspects of
technical requirements for reporting options incorporated into CEHRT
have been addressed in section II.E.g. of this final rule with comment
period. However, we intend to issue subregulatory guidance regarding
further details on the form and manner of EHR submission.
Comment: One commenter recommended CMS allow each specialty group
within a multi-specialty practice to report its own group data file.
The commenter suggested that if this cannot be done under a single TIN,
then CMS should explicitly encourage multi-specialty practices that
wish to report specialty-specific measure sets and improvement
activities at the group level to register each specialty group under a
different TIN for identification purposes. The commenter recognized
that there may be operational challenges to implementing this
recommendation and is willing to work with CMS and its vendors to
develop the framework for the efficient collection and calculation of
multiple data files for a single MIPS performance category from a
group.
Response: We appreciate the commenter's recommendation and will
take it into consideration in future rulemaking. We refer readers to
section II.E.1.e. of this final rule with comment period for more
information on groups.
After consideration of the comments on our proposals regarding the
MIPS data submission mechanisms, we are modifying the data submission
mechanisms at Sec. 414.1325. We will not be finalizing the data
submission mechanism of administrative claims for the improvement
activities performance category, as it is not technically feasible at
this time. All other data submission mechanisms will be finalized as
proposed. Specifically, we are finalizing at Sec. 414.1325(a) that
MIPS eligible clinicians and groups must submit measures, objectives,
and activities for the quality, improvement activities, and advancing
care information performance categories.
Refer to Tables 3 and 4 of this final rule with comment period for
the finalized data submission mechanisms. Table 3 contains a summary of
the data submission mechanisms for individual MIPS eligible clinicians
that we are finalizing at Sec. 414.1325(b) and (e). Table 4 contains a
summary of the data submission mechanisms for groups that are not
reporting through an APM that we are finalizing at Sec. 414.1325(c)
and Sec. 414.1325(e). Furthermore, we are finalizing our proposal at
Sec. 414.1325(d) that except for groups that elect to report the CAHPS
for MIPS survey, MIPS eligible clinicians and groups may elect to
submit information via multiple mechanisms; however, they must use the
same identifier for all performance categories and they may only use
one submission mechanism per performance category. In addition, we are
finalizing at Sec. 414.1305 the following definitions as proposed: (1)
Attestation means a secure mechanism, specified by CMS, with respect to
a particular performance period, whereby a MIPS eligible clinician or
group may submit the required data for the advancing care information
or the improvement activities performance categories of MIPS in a
manner specified by CMS; (2) CMS-approved survey vendor means a survey
vendor that is approved by CMS for a particular performance period to
administer the CAHPS for MIPS survey and to transmit survey measures
data to CMS; and (3) CMS Web Interface means a web product developed by
CMS that is used by groups that have elected to utilize the CMS Web
Interface to submit data on the MIPS measures and activities.
Table 3--Data Submission Mechanisms for MIPS Eligible Clinicians
Reporting Individually as TIN/NPI
------------------------------------------------------------------------
Performance category/ submission Individual reporting data
combinations accepted submission mechanisms
------------------------------------------------------------------------
Quality................................ Claims.
QCDR.
Qualified registry.
EHR.
Cost................................... Administrative claims (no
submission required).
Advancing Care Information............. Attestation.
QCDR.
Qualified registry.
EHR.
Improvement Activities................. Attestation.
QCDR.
Qualified registry.
EHR.
------------------------------------------------------------------------
[[Page 77095]]
Table 4--Data Submission Mechanisms for Groups
------------------------------------------------------------------------
Performance category/ submission Group reporting data
combinations accepted submission mechanisms
------------------------------------------------------------------------
Quality................................ QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25
or more).
CMS-approved survey vendor for
CAHPS for MIPS (must be
reported in conjunction with
another data submission
mechanism), and
Administrative claims (for all-
cause hospital readmission
measure--no submission
required).
Cost................................... Administrative claims (no
submission required).
Advancing Care Information............. Attestation.
QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25
or more).
Improvement Activities................. Attestation.
QCDR.
Qualified registry.
EHR.
CMS Web Interface (groups of 25
or more).
------------------------------------------------------------------------
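Purely as an illustration (not part of the regulation), the finalized combinations in Table 3 and the one-mechanism-per-category rule at Sec. 414.1325(d) can be sketched as a simple lookup with a validation check. The category and mechanism names below come from Table 3 (individual TIN/NPI reporting); the dictionary and function names are our own, hypothetical constructs.

```python
# Hypothetical sketch of the Table 3 combinations (individual TIN/NPI
# reporting). Mechanism and category names come from the final rule;
# this code is illustrative only, not a CMS artifact.
ALLOWED_MECHANISMS = {
    "quality": {"claims", "qcdr", "qualified registry", "ehr"},
    "cost": {"administrative claims"},  # no submission required
    "advancing care information": {"attestation", "qcdr", "qualified registry", "ehr"},
    "improvement activities": {"attestation", "qcdr", "qualified registry", "ehr"},
}

def validate_submission(selections):
    """selections maps each performance category to the single mechanism chosen.

    Checks two finalized constraints: the mechanism must be allowed for the
    category (Table 3), and only one mechanism may be used per category
    (Sec. 414.1325(d)) -- the latter is enforced structurally because a dict
    holds one value per key.
    """
    errors = []
    for category, mechanism in selections.items():
        allowed = ALLOWED_MECHANISMS.get(category)
        if allowed is None:
            errors.append(f"unknown performance category: {category}")
        elif mechanism not in allowed:
            errors.append(f"{mechanism!r} is not allowed for {category!r}")
    return errors

# Example: attestation is not an allowed quality-category mechanism.
print(validate_submission({"quality": "attestation"}))
```

For instance, a clinician reporting quality via claims and improvement activities via attestation would pass the check, while attestation for the quality category would be flagged, mirroring the clarification in the response above.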
(3) Submission Deadlines
For the submission mechanisms described in the proposed rule (81 FR
28181), we proposed a submission deadline by which all associated data
for all performance categories must be submitted. In establishing the
submission deadlines, we took into account multiple considerations,
including the type of submission mechanism, the MIPS performance
period, and stakeholder input and our experiences under the submission
deadlines for the PQRS, VM, and Medicare EHR Incentive Programs.
Historically, under the PQRS, VM, or Medicare EHR Incentive
Programs, the submission of data occurred after the close of the
performance periods. Our experience has shown that allowing for the
submission of data after the close of the performance period provides
either the MIPS eligible clinician or the third party intermediary time
to ensure the data they submit to us is valid, accurate and has
undergone necessary data quality checks. Stakeholders have also stated
that they would appreciate the ability to submit data to us on a more
frequent basis so they can receive feedback more frequently throughout
the performance period. We also note that, as described in the proposed
rule (81 FR 28179), the MIPS performance period for payments adjusted
in 2019 is CY 2017 (January 1 through December 31).
Based on the factors noted, we proposed at Sec. 414.1325(e) that
the data submission deadline for the qualified registry, QCDR, EHR, and
attestation submission mechanisms would be March 31 following the close
of the performance period. We anticipate that the submission period
would begin January 2 following the close of the performance period.
For example, for the first MIPS performance period, the data submission
period would occur from January 2, 2018, through March 31, 2018. We
note that this submission period is the same time frame as what is
currently available to EPs and group practices under PQRS. We were
interested in receiving feedback on whether it would be advantageous to
(1) have a shorter time frame following the close of the performance
period, or (2) have a submission period that would occur throughout the
performance period, such as biannual or quarterly submissions; and (3)
whether January 1 should also be included in the submission period. We
requested comments on these items.
We further proposed that for the Medicare Part B claims submission
mechanism, the submission deadline would occur during the performance
period with claims required to be processed no later than 90 days
following the close of the performance period. Lastly, for the CMS Web
Interface submission mechanism, the submission deadline will occur
during an 8-week period following the close of the performance period
that will begin no earlier than January 1 and end no later than March
31. For example, the CMS Web Interface submission period could span an
8-week timeframe beginning January 16 and ending March 13. The specific
deadline during this timeframe will be published on the CMS Web site.
We requested comments on these proposals.
The following is a summary of the comments we received on our
proposals regarding MIPS submission deadlines.
Comment: One commenter requested clarity on the first reporting
deadline.
Response: The first proposed submission period for the qualified
registry, QCDR, EHR, and attestation submission mechanisms runs from
January 2, 2018, through March 31, 2018. For the CMS Web Interface
submission mechanism, the first proposed submission deadline will occur
during an 8-week period following the close of the performance period
that will begin no earlier than January 1 and end no later than March
31 (for example, January 16 through March 13, 2018). The specific
deadline during this timeframe will be published on the CMS Web site.
Comment: Several commenters supported the data submission deadline
of March 31 of the year following the performance period. The
commenters also suggested that more frequent submissions could be
useful but only if data are easy to submit. Another commenter
recommended that CMS not make more frequent data submission a
requirement, but allow for reporters to submit data on a more frequent
basis if they so choose. The commenter saw benefit to more frequent
data submission, but stated that there are some concerns CMS should
consider. For example, they noted that monthly submission would not
work well with the advancing care information performance category
requirement to report on patients choosing to view their patient
portal, as patients would have to visit the portal during the month
after their appointment in order for the portal visit to count towards
the measure.
Response: We appreciate the commenters' support. We intend to
explore the capability for more frequent data submission under the MIPS
program. As a starting point we intend to allow for optional, early
data submissions for the qualified registry, QCDR, EHR, and attestation
submission mechanisms. Specifically, we would allow submissions to
begin earlier than January 2, 2018 for those individual MIPS eligible
clinicians and groups who would like to optionally submit data early to
us, if technically feasible. If it is not technically feasible to allow
the submission period to begin prior to January 2 following the close
of the performance period, the submission period will occur from
January 2 through March 31 following the close of the performance
period. Please note that the final deadline for these submission
mechanisms will remain March 31, 2018. Additional details related to
the technical feasibility of early data submissions will be made
available at QualityPaymentProgram.cms.gov.
Comment: Some commenters were concerned about timelines for the
PQRS, VM, and Medicare EHR Incentive Program for EPs. The commenters
believed it was unfair to expect MIPS eligible clinicians and groups to
complete full calendar year reporting in 2016 for EHR Incentive Program
and PQRS and then completely switch to a new program while still
completing attestations for 2016 programs.
[[Page 77096]]
Response: We understand the commenters' concerns and therefore have
modified our proposed policy to allow more flexibility and time for
MIPS eligible clinicians to transition to CEHRT and familiarize
themselves with MIPS requirements. As discussed in section II.E.5.b.(3)
of this final rule with comment period, we are finalizing the policy
that MIPS eligible clinicians will need to report for only a minimum of
a continuous 90-day period within CY 2017, for the majority of the
submission mechanisms, for all data in a given performance category and
submission mechanism, to qualify for an upward adjustment for the
transition year.
Comment: Another commenter called for the elimination of reporting
electronically to data registries unless the registries have been
empirically demonstrated to improve care and reduce cost in practice.
Response: We appreciate the comment regarding the function of a
qualified registry to improve care and reduce cost in practice. We
agree that registries are a tool to drive value in clinical practice.
For MIPS, a qualified registry or QCDR is required to provide
attestation statements from the MIPS eligible clinicians during the
data submission period that all of the data (quality measures,
improvement activities, and advancing care information measures and
activities, if applicable) and results are accurate and complete.
Comment: Another commenter believed that limiting performance
category data submission to one mechanism per performance category will
limit innovation and disincentivize reporting the highest quality data
available. The commenter believed that if MIPS eligible clinicians
could report some of the required quality measures through a QCDR, they
should be allowed to do so. Other commenters supported CMS' proposal to
retain reporting mechanisms available in PQRS but opposed the proposal
to allow only one submission mechanism per performance category,
especially for the quality performance category. The commenters stated
that some MIPS eligible clinicians may need to report through multiple
mechanisms, such as MIPS eligible clinicians reporting a proposed
specialty-specific measure set containing measures requiring differing
submission mechanisms. A few commenters requested that CMS reconsider
its proposal that all quality measures used by CMS must be submitted
from the same reporting method because there are limits in the
applicable reporting methods for certain measures, with some specialty-
specific measure sets having very few EHR-enabled measures. These
commenters believed the MIPS eligible clinicians should be able to use
multiple reporting options. Another commenter urged CMS to limit the
number of measure data reporting options so hospitals, health systems,
and national stewards can accurately assess and benchmark performance
over time. Another commenter recommended that the flexibility to report
measures using a variety of submission mechanisms remain in place for
at least the first 3 to 5 years of the program.
Response: MIPS eligible clinicians may choose whichever data
submission mechanism works best for their practice. We have provided
many data submission options to allow the utmost flexibility for the
MIPS eligible clinician. We believe the proposal to allow multiple
mechanisms, while restricting the number of mechanisms per performance
category, offers flexibility without adding undue complexity. We
discuss our policies related to multiple methods of reporting within a
performance category in section II.E.5.a. of this final rule with
comment period. We would also like to note that in section II.E.6.a. of
this final rule with comment period we are seeking comment for further
consideration on additional flexibilities that should be offered for
MIPS eligible clinicians in this situation.
In addition, we do not believe that allowing these various
submission mechanisms impacts the ability to create reliable and
accurate measure benchmarks. We discuss our policies related to measure
benchmarks in more detail in section II.E.6.e. of this final rule with
comment period.
Comment: One commenter recommended that CMS require Medicare Part B
claims to be submitted, rather than processed, within 90 days of the
close of the applicable performance period, as MIPS eligible clinicians
have no control over how quickly claims are processed and should not be
held responsible for delays. Another commenter recommended that the
submission time period be extended to 12 weeks, as more data will be
required to be submitted than historically during that time period.
Other commenters expressed concern with CMS' proposed submission
deadline and requested a minimum 90-day submission period as MIPS
eligible clinicians employed by health systems may not have access to
December data until February and cumulative data even later. The
commenters further believed that submission periods should be
standardized regardless of submission mechanism and suggest a
submission period from January 1 through March 31. A few commenters
agreed with the proposed 90-day submission period policy for submittal
of data via the claims mechanism and noted that the prior deadline was
often too challenging for MIPS eligible clinicians to meet.
Response: In establishing the submission deadlines, we took into
account multiple considerations, including the type of submission
mechanism, the MIPS performance period, and stakeholder input and our
experiences under the submission deadlines for the PQRS, VM, and
Medicare EHR Incentive Program. Our experience has shown that allowing
for the submission of data after the close of the performance period
provides either the MIPS eligible clinician or the third party
intermediary time to ensure the data they submit to us is valid,
accurate and has undergone necessary data quality checks. We do note,
however, that as indicated previously in this final rule with comment
period, we would allow submissions to begin earlier than January 2,
2018 for those individual MIPS eligible clinicians and groups who would
like to optionally submit data early to us, provided that it is
technically feasible. If it is not technically feasible, individual
MIPS eligible clinicians and groups will still be able to submit data
during the normal data submission period. Please note that the final
deadline for all submission mechanisms will remain at March 31, 2018.
However, for the Medicare Part B claims submission mechanism, we
believe the best approach for the data submission deadline is to
require Medicare Part B claims to be processed no later than 60 days
following the close of the performance period.
Comment: Another commenter stated that despite MIPS data submission
via the CMS Web Interface, the process of data verification prior to
submission is still manual and labor-intensive. The commenter
encouraged CMS to explore methods for allowing test submissions
(whether throughout the performance period or during the submission
window) to uncover any possible submission errors; this would provide
an opportunity for CMS to give feedback to MIPS eligible clinicians and
third party intermediaries in advance of the submission deadline.
Response: We appreciate the feedback and would like to note as
indicated previously in this final rule with comment period, we would
allow submissions to begin earlier than January 2, 2018 for those
individual MIPS eligible clinicians and groups who would like to
optionally submit data
[[Page 77097]]
early to us, if technically feasible. If it is not technically feasible
to allow the submission period to begin prior to January 2 following
the close of the performance period, the submission period will occur
from January 2 through March 31 following the close of the performance
period. Please note that the final deadline for these submission
mechanisms will remain March 31, 2018.
Comment: We received comments on our request for feedback on
whether it would be advantageous to (1) have a shorter time frame
following the close of the performance period, or (2) have a submission
period that would occur throughout the performance period, such as
biannual or quarterly submissions; and (3) whether January 1 should
also be included in the submission period. A few commenters opposed shorter
reporting timeframes for MIPS eligible clinicians using the CMS Web
Interface or other reporting mechanisms. The commenters recommended, in
general, quarterly or semi-annual data submission periods, with a
minimum of at least one report annually, and subsequently a quarterly
report by CMS detailing MIPS eligible clinicians' progress. The
commenters recommended a real-time tool for MIPS eligible clinicians to
be able to track their MIPS progress. Another commenter stated that
MIPS reporting deadlines should be no earlier than 2 months following
the notification of QP status. Other commenters stated that bi-annual
and quarterly submission period requirements would be advantageous only
if CMS intended to provide timely MIPS eligible clinician feedback on a
quarterly basis. They stated that if quarterly reporting were to be
required, EHR vendors would need to have upfront notice regarding
changes in measures in order to prepare. One commenter expressed that
clinicians must know the standards by which they will be measured in
advance of the performance period and require 3 months after the
performance period to scrub data before submitting. The commenter
stated that quarterly data submission would be too burdensome.
Response: We appreciate the feedback and agree with the commenter
that we want to strike the right balance on allowing for more frequent
submissions which would allow us to issue more frequent performance
feedback, while ensuring that the process that is developed is not
overly burdensome. Therefore, as indicated previously in this final
rule with comment period, we would allow submissions to begin earlier
than January 2, 2018 for those individual MIPS eligible clinicians and
groups who would like to optionally submit data early to us, if
technically feasible. If it is not technically feasible to allow the
submission period to begin prior to January 2 following the close of
the performance period, the submission period will occur from January 2
through March 31 following the close of the performance period. Please
note that the final deadline for these submission mechanisms will
remain March 31, 2018.
After consideration of the comments received on the proposals
regarding MIPS submission deadlines, we are finalizing the submission
deadlines as proposed with one modification. Specifically, we are
finalizing at Sec. 414.1325(f) the data submission deadline for the
qualified registry, QCDR, EHR, and attestation submission mechanisms as
March 31 following the close of the performance period. The submission
period will begin prior to January 2 following the close of the
performance period, if technically feasible. For example, for the first
MIPS performance period, the data submission period will begin before
January 2, 2018, and run through March 31, 2018, if technically
feasible. If it
is not technically feasible to allow the submission period to begin
prior to January 2 following the close of the performance period, the
submission period will occur from January 2 through March 31 following
the close of the performance period. In any case, the final deadline
will remain March 31, 2018.
We further finalize at Sec. 414.1325(f)(2) that for the Medicare
Part B claims submission mechanism, claims must contain dates of
service during the performance period and must be processed no later
than 60 days following the close of the performance period. Lastly, for
the CMS Web Interface submission mechanism, we are finalizing at Sec.
414.1325(f)(3) that the submission deadline will occur during an 8-week
period following the close of the performance period, beginning no
earlier than January 1 and ending no later than March 31. For
example, the CMS Web Interface submission period could span an 8-week
timeframe beginning January 16 and ending March 13. The specific
deadline during this timeframe will be published on the CMS Web site.
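To make the finalized timing rules concrete, the sketch below checks dates against the finalized windows for the first performance period (CY 2017): qualified registry, QCDR, EHR, and attestation data are due by March 31, 2018, and Medicare Part B claims must be processed no later than 60 days after the close of the performance period. This is an illustration only; the constants and function names are ours, not CMS's.

```python
from datetime import date, timedelta

# Illustrative check of the finalized MIPS deadlines for the first
# performance period (CY 2017). Dates come from the final rule; the
# names below are hypothetical, not CMS terminology.
PERFORMANCE_PERIOD_CLOSE = date(2017, 12, 31)
REGISTRY_DEADLINE = date(2018, 3, 31)          # registry, QCDR, EHR, attestation
CLAIMS_PROCESSING_WINDOW = timedelta(days=60)  # Part B claims processing window

def registry_submission_on_time(submitted: date) -> bool:
    # Sec. 414.1325(f): final deadline is March 31 following the
    # close of the performance period.
    return submitted <= REGISTRY_DEADLINE

def claims_processed_on_time(processed: date) -> bool:
    # Sec. 414.1325(f)(2): claims must be processed no later than
    # 60 days following the close of the performance period.
    return processed <= PERFORMANCE_PERIOD_CLOSE + CLAIMS_PROCESSING_WINDOW

print(registry_submission_on_time(date(2018, 3, 31)))        # True: on the deadline
print(PERFORMANCE_PERIOD_CLOSE + CLAIMS_PROCESSING_WINDOW)   # 2018-03-01, the claims cutoff
```

Note that the 60-day claims cutoff (March 1, 2018) lands earlier than the March 31 registry deadline, which is why the rule treats the claims mechanism's timing separately from the other submission mechanisms.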
b. Quality Performance Category
(1) Background
(a) General Overview and Strategy
The MIPS program is one piece of the broader health care
infrastructure needed to reform the health care system and improve
health care quality, efficiency, and patient safety for all Americans.
We seek to balance the sometimes competing considerations of the health
system and minimize burdens on health care providers given the short
timeframe available under the MACRA for implementation. Ultimately,
MIPS should, in concert with other provisions of the Act, support
health care that is patient-centered, evidence-based, prevention-
oriented, outcome driven, efficient, and equitable.
Under MIPS, clinicians are incentivized to engage in improvement
measures and activities that have a proven impact on patient health and
safety and are relevant to their patient population. We envision a
future state where MIPS eligible clinicians will be seamlessly using
their certified health IT to leverage advanced clinical quality
measurement to manage patient populations with the least amount of
workflow disruption and reporting burden. Ensuring clinicians are held
accountable for patients' transitions across the continuum of care is
imperative. For example, when a patient is discharged from an emergency
department (ED) to a primary care physician office, health care
providers on both sides of the transition should have a shared
incentive for a seamless transition. Clinicians may also be working
with a QCDR to abstract and report quality measures to CMS and
commercial payers and to track patients longitudinally over time for
quality improvement.
Ideally, clinicians in the MIPS program will have accountability
for quality and cost measures that are related to one another and will
be engaged in improvement activities that directly help them improve in
both specialty-specific clinical practice and more holistic areas (for
example, patient experience, prevention, population health). The cost
performance category will provide clinicians with information needed to
deliver efficient, effective, and high-value care. Finally, MIPS
eligible clinicians will be using CEHRT and other tools which leverage
interoperable standards for data capture, usage, and exchange in order
to facilitate and enhance patient and family engagement, care
coordination among diverse care team members, and continuous learning
and rapid-cycle improvement leveraging advanced quality measurement and
safety initiatives.
One of our goals in the MIPS program is to use a patient-centered
approach to program development that will lead to better, smarter, and
healthier care. Part of that goal includes meaningful
[[Page 77098]]
measurement which we hope to achieve through:
• Measuring performance on measures that are relevant and
meaningful.
• Maximizing the benefits of CEHRT.
• Flexible scoring that recognizes all of a MIPS eligible
clinician's efforts above a minimum level of effort and rewards
performance that goes above and beyond the norm.
• Measures that are built around real clinical workflows and
data captured in the course of patient care activities.
• Measures and scoring that can discern meaningful
differences in performance in each performance category and
collectively between low and high performers.
(b) The MACRA Requirements
Sections 1848(q)(1)(A)(i) and (ii) of the Act require the Secretary
to develop a methodology for assessing the total performance of each
MIPS eligible clinician according to performance standards and, using
that methodology, to provide for a final score for each MIPS eligible
clinician. Section 1848(q)(2)(A)(i) of the Act requires us to use the
quality performance category in determining each MIPS eligible
clinician's final score, and section 1848(q)(2)(B)(i) of the Act
describes the measures and activities that must be specified under the
quality performance category.
The statute does not specify the number of quality measures on
which a MIPS eligible clinician must report, nor does it specify the
amount or type of information that a MIPS eligible clinician must
report on each quality measure. However, section 1848(q)(2)(C)(i) of
the Act requires the Secretary, as feasible, to emphasize the
application of outcomes-based measures.
Section 1848(q)(1)(E) of the Act requires the Secretary to
encourage the use of QCDRs, and section 1848(q)(5)(B)(ii)(I) of the Act
requires the Secretary to encourage the use of CEHRT and QCDRs for
reporting measures under the quality performance category under the
final score methodology, but the statute does not limit the Secretary's
discretion to establish other reporting mechanisms.
Section 1848(q)(2)(C)(iv) of the Act generally requires the
Secretary to give consideration to the circumstances of non-patient
facing MIPS eligible clinicians and allows the Secretary, to the extent
feasible and appropriate, to apply alternative measures or activities
to such clinicians.
(c) Relationship to the PQRS and VM
Previously, the PQRS, which is a pay-for-reporting program, defined
requirements for satisfactory reporting and satisfactory participation
to earn payment incentives or to avoid a PQRS payment adjustment. EPs
could choose from a number of reporting mechanisms and options. Based
on the reporting option, the EP had to report on a certain number of
measures for a certain portion of their patients. In addition, the
measures had to span a set number of National Quality Strategy (NQS)
domains; information related to the NQS can be found at http://www.ahrq.gov/workingforquality/about.htm. The VM built its policies off
the PQRS criteria for avoiding the PQRS payment adjustment. Groups that
did not meet the criteria as a group to avoid the PQRS payment
adjustment, or groups that did not have at least 50 percent of their
EPs meet the criteria as individuals to avoid the PQRS payment
adjustment, automatically received the maximum negative adjustment
established under the VM and were not measured on their quality
performance.
MIPS, in contrast to PQRS, is not a pay-for-reporting program, and
we proposed that it would not have a ``satisfactory reporting''
requirement. However, to develop an appropriate methodology for scoring
the quality performance category, we believe that MIPS needs to define
the expected data submission criteria and that the measures need to
meet a data completeness standard. In the proposed rule (81 FR 28184),
we proposed the minimum data submission criteria and data completeness
standard for the MIPS quality performance category for the submission
mechanisms that were discussed in the proposed rule (81 FR 28181), as
well as benchmarks against which eligible clinicians' performance would
be assessed. The scoring methodology discussed in the proposed rule (81
FR 28220) would adjust the quality performance category scores based on
whether or not an individual MIPS eligible clinician or group met these
criteria and how their performance compared against the benchmarks.
In the MIPS and APMs RFI, we requested feedback on numerous
provisions related to data submission criteria including: How many
measures should be required? Should we maintain the policy that
measures cover a specified number of NQS domains? How do we apply the
quality performance category to MIPS eligible clinicians that are in
specialties that may not have enough measures to meet our defined
criteria? Several themes emerged from the comments. Commenters
expressed concern that the general PQRS satisfactory reporting
requirement to report nine measures across three NQS domains is too
high and forces eligible clinicians to report measures that are not
relevant to their practices. The commenters requested a more meaningful
set of requirements that focused on patient care, with some expressing
the opinion that NQS domain requirements are arbitrary and make
reporting more difficult. Some commenters requested that we align
measures across payers and consider using core measure sets. Other
commenters expressed the need for flexibility and different reporting
options for different types of practices.
In response to the MIPS and APMs RFI comments, and based on our
desire to simplify the MIPS reporting system and make the measurement
more meaningful, we proposed MIPS quality criteria that focus on
measures that are important to beneficiaries and maintain some of the
flexibility from PQRS, while addressing several of the issues that
concerned commenters.
To encourage meaningful measurement, we proposed to allow
individual MIPS eligible clinicians and groups the flexibility to
determine the most meaningful measures and reporting mechanisms for
their practice.
To simplify the reporting criteria, we are aligning the
submission criteria for several of the reporting mechanisms.
To reduce administrative burden and focus on measures that
matter, we are lowering the expected number of the measures for several
of the reporting mechanisms, yet are still requiring that certain types
of measures be reported.
To create alignment with other payers and reduce burden on
MIPS eligible clinicians, we are incorporating measures that align with
other national payers.
To create a more comprehensive picture of the practice
performance, we also proposed to use all-payer data where possible.
As beneficiary health is always our top priority, we proposed
criteria to continue encouraging the reporting of certain measures such
as outcome, appropriate use, patient safety, efficiency, care
coordination, or patient experience measures. However, we proposed to
remove the requirement for measures to span across multiple domains of
the NQS. We continue to believe the NQS domains to be extremely
important and we encourage MIPS eligible clinicians to continue to
strive to provide care that focuses on: effective clinical care,
communication,
[[Page 77099]]
efficiency and cost reduction, person and caregiver-centered experience
and outcomes, community and population health, and patient safety.
While we will not require that a certain number of measures must span
multiple domains, we encourage MIPS eligible clinicians to select
measures that cross multiple domains. In addition, we believe the MIPS
program overall, with the focus on cost, improvement activities, and
advancing care information performance categories, will naturally cover
many elements in the NQS.
(2) Contribution to the Final Score
For the 2019 MIPS adjustment year, the quality performance category
will account for 50 percent of the final score, subject to the
Secretary's authority to assign different scoring weights under section
1848(q)(5)(F) of the Act. Section 1848(q)(2)(E)(i)(I)(aa) of the Act
states the quality performance category will account for 30 percent of
the final score for MIPS. However, section 1848(q)(2)(E)(i)(I)(bb) of
the Act stipulates that for the first and second years for which MIPS
applies to payments, the percentage of the final score applicable for
the quality performance category will be increased so that the total
percentage points of the increase equals the total number of percentage
points by which the percentage applied for the cost performance
category is less than 30 percent. Section 1848(q)(2)(E)(i)(II)(aa) of
the Act requires that, for the first year for which MIPS applies to
payments, not more than 10 percent of the final score shall be based
on performance on the cost performance category. Furthermore, section
1848(q)(2)(E)(i)(II)(bb) of the Act states that, for the second year
for which MIPS applies to payments, not more than 15 percent of the
final score shall be based on performance on the cost performance
category. We proposed at Sec. 414.1330 that, for payment years 2019
and 2020, 50 percent and 45 percent, respectively, of the MIPS final
score
would be based on performance on the quality performance category. For
the third and future years, 30 percent of the MIPS final score would be
based on performance on the quality performance category.
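The interaction of these provisions reduces to a simple arithmetic rule: the quality weight starts at the statutory 30 percent and absorbs whatever the cost category gives up below 30 percent. The following is a minimal sketch of that arithmetic (an illustration only, not CMS's scoring implementation):

```python
def quality_weight(cost_weight_pct: float) -> float:
    """Illustrative sketch of section 1848(q)(2)(E)(i)(I)(bb): the
    quality performance category weight equals 30 percent plus the
    number of percentage points by which the cost category falls
    short of 30 percent."""
    return 30.0 + (30.0 - cost_weight_pct)

# Cost capped at 10% in the first year and 15% in the second year.
print(quality_weight(10))  # 50.0 -> proposed 2019 quality weight
print(quality_weight(15))  # 45.0 -> proposed 2020 quality weight
print(quality_weight(30))  # 30.0 -> steady-state weight thereafter
```

The same rule, applied to the finalized cost weights of 0 and 10 percent, yields the finalized quality weights of 60 and 50 percent discussed later in this section.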
Section 1848(q)(5)(B)(i) of the Act requires the Secretary to treat
any MIPS eligible clinician who fails to report on a required measure
or activity as achieving the lowest potential score applicable to the
measure or activity. Specifically, under our proposed scoring policies,
a MIPS eligible clinician or group that reports on all required
measures and activities could potentially obtain the highest score
possible within the performance category, presuming they performed well
on the measures and activities they reported. A MIPS eligible clinician
or group who does not meet the reporting threshold would receive a zero
score for the unreported items in the category (in accordance with
section 1848(q)(5)(B)(i) of the Act). The MIPS eligible clinician or
group could still obtain a relatively good score by performing very
well on the remaining items, but a zero score would prevent the MIPS
eligible clinician or group from obtaining the highest possible score.
The following is a summary of the comments we received regarding our
general strategy and the quality performance category contribution to
the final score.
Comment: Numerous commenters supported the focus on quality in the
proposed rule and our proposal that, for payment year 2019, 50 percent
of the final score would be based on performance on quality measures.
Response: We thank the commenters for their support.
Comment: Other commenters were concerned with the quality
performance category's final score weights decreasing to 30 percent for
payment years 2021 and beyond, as some eligible clinicians will not be
eligible to participate in MIPS and receive a MIPS adjustment until
payment year 2021. The commenters believed this would be a disadvantage
with the cost performance category final score weight increasing. The
commenters noted that increasing penalties under MIPS would also place
such clinicians in an unfair position. The commenters requested that
CMS make appropriate considerations for such MIPS eligible clinicians.
Response: We appreciate the concerns raised that MIPS eligible
clinicians who are not initially eligible to participate in MIPS and
receive MIPS adjustments until payment year 2021 might have a different
starting point than those MIPS eligible clinicians who begin
participating in CY 2017. We note that those MIPS eligible clinicians
who are not initially eligible to participate in MIPS and receive MIPS
adjustments, do have the option to volunteer to report. By volunteering
to report, these eligible clinicians will gain experience with the MIPS
scoring system prior to being required to do so. We will, however, take
the commenters' recommendation into consideration for future
rulemaking.
Comment: Another commenter requested that when the time comes to
include rehabilitation therapists in the MIPS program, they be granted
the same stepped-down percentage of scoring for quality and stepped-up
percentage of scoring for cost that are in place for those MIPS
eligible clinicians participating in the MIPS program in the first 2
years. Such an approach would give those MIPS eligible clinicians the
same time and consideration that doctors of medicine or osteopathy,
doctors of dental surgery or dental medicine, physician assistants,
nurse practitioners, clinical nurse specialists, and certified
registered nurse anesthetists will receive during their transition to
the MIPS program.
Response: We would like to explain that in the first 2 years of the
MIPS program, the quality weight will be higher and the cost weight
will be lower. In addition, we note that those MIPS eligible clinicians
who are not initially eligible to participate in MIPS in 2017 for the
2019 MIPS payment year, do have the option to voluntarily report. By
volunteering to report, these eligible clinicians will gain experience
with the MIPS scoring system prior to being required to do so. We thank
the commenter for their feedback and will take their comments into
consideration in future rulemaking.
Comment: One commenter supported CMS' proposal to incentivize MIPS
eligible clinicians to use CEHRT for end-to-end electronic reporting.
Response: We thank the commenter for their support.
Comment: One commenter stated they were concerned about how
different evaluation criteria have been weighed in the MIPS program.
They believed there was an arbitrary nature and bias in the weighting
for MIPS which they stated cannot be corrected through a change in
weighting. The commenter provided an example of the scoring system
including bonus points, which they believed results in an inaccurate
view of real outcomes.
Response: We do not believe that the evaluation criteria we have
developed and proposed for MIPS are arbitrary or biased. Moreover, as
we explained in the proposed rule (81 FR 28255), bonus points are
intended to recognize quality measurement priorities. We believe that
recognition is necessary to focus quality improvement efforts on
specific CMS goals.
Comment: Another commenter suggested for the quality performance
measures that CMS adopt standards and mapping tools by ensuring that
eCQM calculations are accurate. In addition, the commenter stated CMS
should adopt standards to ensure different EHRs are accurately and
uniformly capturing eCQMs. Another commenter recommended that CMS
ensure that the eCQMs in the quality performance category align with
measures used by
[[Page 77100]]
other payers and accrediting and certification programs (for example,
NCQA), noting that if the specifications do not align, the commenter
believed that shared data will not help streamline the reporting
processes.
Response: We thank the commenters and agree that adopting standards
to accurately and uniformly capture eCQMs is essential. We currently
use the Health Level Seven (HL7) standard Health Quality Measures
Format (HQMF) for electronically documenting eCQM content as well as
the Quality Data Model (QDM) for measure logic. We will continue to
ensure industry standards are used and refined in order to best capture
eCQM data.
Comment: One commenter recommended that CMS consider merging the
quality and cost performance categories as a ratio of quality and cost.
Response: We do not believe we have the statutory authority to
merge the quality and cost performance categories. MACRA specified the
four performance categories we are required to incorporate into the
MIPS program.
After consideration of the comments received regarding our general
strategy and the quality performance category contribution to the final
score and the additional factors described in section II.E.5.b. of this
final rule with comment period, we are not finalizing this policy as
proposed. Rather, as discussed in section II.E.5.e. of this final rule
with comment period, the cost performance category will account for 0
percent of the final score in 2019, 10 percent of the final score in
2020, and 30 percent of the final score in 2021 and future MIPS payment
years, as required by statute. In accordance with section
1848(q)(2)(E)(i)(I)(bb) of the Act, we are redistributing the final
score weight from cost performance category to the quality performance
category. Therefore, we are finalizing at Sec. 414.1330(b) that, for MIPS
payment years 2019 and 2020, 60 percent and 50 percent, respectively,
of the MIPS final score will be based on performance on the quality
performance category. For the third and future years, 30 percent of the
MIPS final score will be based on performance on the quality
performance category.
(3) Quality Data Submission Criteria
(a) Submission Criteria
The following are the proposed criteria for the various proposed
MIPS data submission mechanisms described in the proposed rule (81 FR
28181) for the quality performance category.
(i) Submission Criteria for Quality Measures Excluding CMS Web
Interface and CAHPS for MIPS
We proposed at Sec. 414.1335 that individual MIPS eligible
clinicians submitting data via claims and individual MIPS eligible
clinicians and groups submitting via all mechanisms (excluding CMS Web
Interface, and for CAHPS for MIPS survey, CMS-approved survey vendors)
would be required to meet the following submission criteria. We
proposed that for the applicable 12-month performance period, the MIPS
eligible clinician or group would report at least six measures
including one cross-cutting measure (if patient-facing) found in Table
C of the Appendix in this final rule with comment period and including
at least one outcome measure. If an applicable outcome measure is not
available, we proposed that the MIPS eligible clinician or group would
be required to report one other high priority measure (appropriate use,
patient safety, efficiency, patient experience, and care coordination
measures) in lieu of an outcome measure. If fewer than six measures
apply to the individual MIPS eligible clinician or group, then we
proposed the MIPS eligible clinician or group would be required to
report on each measure that is applicable.
MIPS eligible clinicians and groups would select their measures
from either the list of all MIPS measures in Table A of the Appendix in
this final rule with comment period, or a specialty-specific
measure set in Table E of the Appendix in this final rule with comment
period. We noted that some specialty-specific measure sets include
measures grouped by subspecialty; in these cases, the measure set is
defined at the subspecialty level.
We designed the specialty-specific measure sets to address feedback
we have received in the past that the quality measure selection process
can be confusing. A common complaint about PQRS was that EPs were asked
to review close to 300 measures to find applicable measures for their
specialty. The specialty measure sets in Table E of the Appendix in
this final rule with comment period contain the same measures that are
within Table A of the Appendix in this final rule with comment period;
however, these are sorted consistent with the American Board of Medical
Specialties (ABMS) specialties. Please note that these specialty-
specific measure sets are not all inclusive of every specialty or
subspecialty. We requested comments on the measures proposed under each
of the specialty-specific measure sets. Specifically, we solicited
comments on whether or not the measures proposed for inclusion in the
specialty-specific measure sets are appropriate for the designated
specialty or subspecialty and whether there are additional proposed
measures that should be included in a particular specialty-specific
measure set.
Furthermore, in the proposed rule we noted that there were some
special scenarios for those MIPS eligible clinicians who selected their
measures from a specialty-specific measure set at either the specialty
or subspecialty level (Table E of the Appendix in this final rule with
comment period). We provided the following example in the proposed
rule: some of the specialty-specific measure sets have fewer than six
measures; in these instances, MIPS eligible clinicians would report on
all of the available measures within the set, including an outcome
measure or, if an outcome measure is unavailable, another high
priority measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures), and a cross-cutting
measure if they are a patient-facing MIPS eligible clinician. To
illustrate, at
the subspecialty-level the electrophysiology cardiac specialist
specialty-specific measure set only has three measures within the set,
all of which are outcome measures. MIPS eligible clinicians and groups
reporting on the electrophysiology cardiac specialist specialty-
specific measure set would report on all three measures and since these
MIPS eligible clinicians are patient-facing they must also report on a
cross-cutting measure which is defined in Table C of the Appendix in
this final rule with comment period. In other scenarios, the specialty-
specific measure sets may have six or more measures, and in these
instances MIPS eligible clinicians would report on at least six
measures including at least one cross-cutting measure and at least one
outcome measure or, if an outcome measure is unavailable, report
another high priority measure (appropriate use, patient safety,
efficiency, patient experience, and care coordination measure).
Specifically, the general surgery specialty-specific measure set has
eight measures within the set, including four outcome measures, three
other high priority measures and one process measure. MIPS eligible
clinicians and groups reporting on the general surgery specialty-
specific measure set would either have the option to report on all
measures within the set or could select six measures from the set;
since these MIPS eligible clinicians are patient-facing, one of their
[[Page 77101]]
six measures must be a cross-cutting measure which is defined in Table
C of the Appendix in this final rule with comment period.
As noted above, the submission criteria are provided for each
specialty-specific measure set, or in the measure set defined at the
subspecialty level, if applicable. Regardless of the number of measures
that are contained in a specialty-specific measure set, MIPS eligible
clinicians reporting on a measure set would be required to report at
least one cross-cutting measure and either at least one outcome measure
or, if no outcome measures are available in that specialty-specific
measure set, report another high priority measure. We proposed that
MIPS eligible clinicians or groups that report on a specialty-specific
measure set that includes more than six measures can report on as many
measures as they wish as long as they meet the minimum requirement to
report at least six measures, including one cross-cutting measure and
one outcome measure, or if an outcome measure is not available another
high priority measure. We solicited comment on our proposal to allow
reporting of specialty-specific measure sets to meet the submission
criteria for the quality performance category, including whether it is
appropriate to allow reporting of a measure set at the subspecialty
level to meet such criteria, since reporting at the subspecialty level
would require reporting on fewer measures.
Alternatively, we solicited comment on whether we should only
consider reporting up to six measures at the higher overall specialty
level to satisfy the submission criteria. We noted that our proposal to
allow reporting of specialty-specific measure sets at the subspecialty
level was intended to address the fact that very specialized clinicians
who may be represented by our subspecialty categories may only have one
or two applicable measures. Further, we note that we will continue to
work with specialty societies and other measure developers to increase
the availability of applicable measures for specialists across the
board.
We proposed to define a high priority measure at Sec. 414.1305 as
an outcome, appropriate use, patient safety, efficiency, patient
experience, or care coordination quality measure. These measures are
identified in Table A of the Appendix in this final rule with comment
period. We further note that measure types listed as an ``intermediate
outcome'' are considered outcome measures for the purposes of scoring
(see 81 FR 28247).
As an alternative to the above proposals, we also considered
requiring individual MIPS eligible clinicians submitting via claims and
individual MIPS eligible clinicians and groups submitting via all
mechanisms (excluding the CMS Web Interface and, for CAHPS for MIPS
survey, CMS-approved survey vendors) to meet the following submission
criteria. For the applicable 12-month performance period, the MIPS
eligible clinician or group would report at least six measures
including one cross-cutting measure (if patient-facing) found in Table
C of the Appendix in this final rule with comment period and one high
priority measure (outcome, appropriate use, patient safety, efficiency,
patient experience, and care coordination measures). If fewer than six
measures apply to the individual MIPS eligible clinician or group, then
the MIPS eligible clinician or group must report on each measure that
is applicable. MIPS eligible clinicians and groups will have to select
their measures from either the list of all MIPS Measures in Table A of
the Appendix in this final rule with comment period or a
specialty-specific measure set in Table E of the Appendix in this final
rule with comment period.
As discussed in the proposed rule (81 FR 28173), MIPS eligible
clinicians who are non-patient facing would
not be required to report any cross-cutting measures. For further
details on non-patient facing MIPS eligible clinician discussions, we
refer readers to section II.E.1.b. of this final rule with comment
period.
In addition, in the proposed rule (81 FR 28187) we discussed our
intention to develop a validation process to review and validate a MIPS
eligible clinician's or group's ability to report on at least six
quality measures, or a specialty-specific measure set, with a
sufficient sample size, including at least one cross-cutting measure
(if the MIPS eligible clinician is patient-facing) and either an
outcome measure if one is available or another high priority measure.
If a MIPS eligible clinician or group had the ability to report on the
minimum required measures with sufficient sample size and elected to
report on fewer than the minimum required measures, then, as described
in the proposed scoring algorithm (81 FR 28254), the missing measures
would be scored with a zero performance score.
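The zero-score treatment can be sketched numerically (a simplified illustration; actual MIPS scoring assigns points per measure against benchmarks, as discussed at 81 FR 28254):

```python
def quality_category_average(submitted_scores, required_count):
    """Illustrative sketch: when a clinician could have reported the
    required number of measures but submitted fewer, each missing
    measure counts as a zero rather than being dropped from the
    denominator. Point values here are illustrative, not actual MIPS
    benchmark deciles."""
    missing = max(0, required_count - len(submitted_scores))
    padded = list(submitted_scores) + [0.0] * missing
    return sum(padded) / len(padded)
```

Under this sketch, reporting four of six required measures at 10 points each yields an average of 40/6, or roughly 6.7 points rather than 10, so withholding reportable measures lowers, rather than protects, the category score.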
Our proposal is a decrease from the 2016 PQRS requirement to report
at least nine measures. In addition, as previously noted, we proposed
to no longer require reporting across multiple NQS domains. We believed
these proposals were the best approach for the quality performance
category because they decrease the MIPS eligible clinician's reporting
burden while focusing on more meaningful types of measures.
We also note that we believe that outcome measures are more
valuable than clinical process measures and are instrumental to
improving the quality of care patients receive. To keep the emphasis on
such measures in the statute, we plan to increase the requirements for
reporting outcome measures over the next several years through future
rulemaking, as more outcome measures become available. For example, we
may increase the required number of outcome measures to two or three.
We also believe that appropriate use, patient experience, safety, and
care coordination measures are more relevant than clinical process
measures for improving care of patients. Through future rulemaking, we
plan to increase the requirements for reporting on these types of
measures over time.
In consideration of which MIPS measures to identify as reasonably
focused on appropriate use, we have selected measures which focus on
minimizing overuse of services, treatments, or the related ancillary
testing that may promote overuse of services and treatments. We have
also included select measures of underuse of specific treatments or
services that either (1) reflected overuse of alternative treatments
and services that were not evidence-based or supported by clinical
guidelines; or (2) where the intent of the measure reflected overuse of
alternative treatments and services that were not evidence-based or
supported by clinical guidelines. We realize there are differing
opinions on what constitutes appropriate use. Therefore, we solicited
comments on what specific measures of over or under use should be
included as appropriate use measures.
We plan to incorporate new measures as they become available and
will give the public the opportunity to comment on these provisions
through future notice and comment rulemaking. Under the Improving
Medicare Post-Acute Care Transformation (IMPACT) Act of 2014, the
Office of the Assistant Secretary for Planning and Evaluation (ASPE)
has been conducting studies on the issue of risk adjustment for
sociodemographic factors on quality measures and cost, as well as other
strategies for including sociodemographic status (SDS) evaluation in
CMS programs. We will
closely examine the ASPE studies when they are available and
incorporate findings as feasible and appropriate through future
rulemaking. We look forward to working with stakeholders in this
process. In
[[Page 77102]]
addition, we solicited comments on ways to minimize potential gaming,
for example, requiring MIPS eligible clinicians to report only on
measures for which they have a sufficient sample size, to address
concerns that MIPS eligible clinicians may solely report on measures
that do not have a sufficient sample size to decrease the overall
weight on their quality score. More information on the way we proposed
to score MIPS eligible clinicians in this scenario is discussed in the
proposed rule (81 FR 28187). We also solicited comment on whether these
proposals sufficiently encourage clinicians and measure developers to
move away from clinical process measures and towards outcome measures
and measures that reflect other NQS domains. We requested comments on
these proposals.
The following is a summary of the comments we received regarding our
proposal on submission criteria for quality measures excluding CMS Web
Interface and CAHPS for MIPS.
Comment: Many commenters expressed support for lowering the
reporting threshold from nine to six quality measures, including one
cross-cutting and one outcome measure, and no longer requiring that
MIPS eligible clinicians report on measures that span three NQS
domains.
Response: We thank the commenters for their support.
Comment: Another commenter appreciated the decreased requirement
relative to PQRS of reporting on six quality measures for MIPS;
however, the commenter was disappointed about our proposal to maintain
an absolute minimum number of measures that MIPS eligible clinicians
are required to report. The commenter believed that the current quality
measures list is insufficient to cover all practice types. The
commenter stated that the challenge of participating would only be
exacerbated by imposition of a minimum number of measures. The
commenter appreciated the lack of penalty if a MIPS eligible clinician
is unable to report on the minimum requirement when they do not have
applicable measures. A few commenters noted that emergency clinicians
who report via claims cannot report on six measures. They stated that
it was not clear from the proposal whether these MIPS eligible clinicians
would still be able to qualify for the full potential score available
under the scoring methodology. Another commenter requested CMS provide
special consideration be given to clinicians practicing at urgent care
centers, including reducing the required number of quality measures to
report on from six to four.
Response: We would like to note that MIPS eligible clinicians with
fewer than six applicable measures are not required to report six
measures, and must only report those measures that are applicable.
While claims-based reporting is one submission mechanism available,
emergency clinicians also have the option to use the other submission
mechanisms available to satisfy the requirements. We further note that
we have revised the emergency medicine specialty-specific measure set
whereby the set now includes 17 measures with 11 of them reportable via
claims. Emergency medicine clinicians will be able to report measures
to earn the full potential score.
Comment: Some commenters disagreed with our proposed measure
threshold of six measures, and recommended maintaining the PQRS
threshold of reported measures at nine. These commenters were concerned
that lowering the threshold of reported measures (from nine to six)
sends the wrong signal about the importance of quality measures within
MIPS. The commenters believed MIPS eligible clinicians might pick and
choose measures that they perform well on, providing a less
comprehensive picture of quality of care. Instead, the commenters
stated CMS should establish mandatory core sets of measures by
specialty/subspecialty groups to signal areas where MIPS eligible
clinicians should focus their attention and increase comparability
across MIPS eligible clinicians. Other commenters believed a core set
of measures would create unequal performance by groups of different
sizes and specialties, allowing single specialty groups to report only
measures specific to their practice. The commenters recommended that
CMS establish benchmarks for a set of core quality measures.
Conversely, other commenters disagreed with our proposed measure
threshold of six measures, and recommended that the measure threshold
be lowered. Recommendations ranged from four measures to three
measures to as few as one or two measures. These commenters indicated
that a reduced
threshold would allow MIPS eligible clinicians to choose a few measures
that will have a high impact on care improvements. Additionally,
commenters were concerned that the threshold of six may burden
practices that are struggling to find relevant measures and jeopardize
their ability to achieve the maximum number of points under the quality
performance category. The commenters stated that fewer required
measures will reduce administrative burden, better reflect the
conditions and realities of medical practice, allow MIPS eligible
clinicians time to focus on quality improvement, and lead to more
accurate measurement and a better snapshot of quality. Some commenters
requested that CMS, the Department of Health (DOH), The Joint
Commission (TJC), and Det Norske Veritas (DNV) join forces to focus on
meaningful improvement.
Response: We do not believe the thresholds for quality measurement
should be lowered further. In any quality measurement program, we must
balance the data collection burden that we must impose on MIPS eligible
clinicians with the resulting quality performance data that we will
receive. We believe that without sufficiently robust performance data,
we cannot accurately measure quality performance. Therefore, we believe
that we have appropriately struck a balance between requiring
sufficient quality measure data from clinicians and ensuring robust
quality measurement at this time. We want to emphasize that we are
committed to working with stakeholders to improve our quality programs
including MIPS. An integral part of these programs are quality measures
that reflect the scope and variety of the many types of clinical
practice. It is important that we offer enough quality measures that
assess the various practice types and that clinicians report sufficient
measures to allow a reasonable comparison of their quality performance.
We do note that for the initial performance period under the MIPS,
many flexibilities have been implemented, including a modified scoring
approach that ensures that MIPS eligible clinicians who prefer to
submit data on only one or two measures can avoid a negative MIPS
adjustment. Furthermore, our modified scoring approach incentivizes
high performers who have a robust data set available. We refer readers
to section II.E.6. of this final rule with comment period for more
details on the scoring approach.
Comment: Another commenter referenced our proposal, which stated
that ``if fewer than six measures apply to the individual MIPS eligible
clinician or group, then the MIPS eligible clinician or group would be
required to report on each measure that is applicable,'' and noted
that this statement seemed to imply that no penalty would apply. The commenter
requested clarification on this language to ensure that groups would
not be penalized for submitting
[[Page 77103]]
fewer than six measures. Another commenter requested clarification on
how CMS proposes to define ``applicable.'' One commenter suggested that
MIPS eligible clinicians should have the opportunity to pre-certify
with CMS that fewer than six measures are available to them prior to
the beginning of the performance period.
Response: While we expect this to occur in only rare circumstances,
we would like to confirm the commenter's understanding. If fewer than
six measures apply to the MIPS eligible clinician or group, the MIPS
eligible clinician or group would be required to report on each
applicable measure. Additionally, groups that report on a specialty-
specific measure set that has fewer than six measures would only need
to report the measures within that specialty-specific measure set.
Generally, we define ``applicable'' to mean measures relevant to a
particular MIPS eligible clinician's services or care rendered. The
MIPS eligible clinician should be able to review the measure
specifications to see if their services fall into the denominator of
the measure. For example, if a MIPS eligible clinician who is an
interventional radiologist decides to submit data via a specialty-
specific measure set by selecting the interventional radiology
specialty-specific measure sets, this MIPS eligible clinician would not
have six measures applicable to them. Therefore, the MIPS eligible
clinician would submit data on all of the measures defined within the
specialty-specific measure set. MIPS eligible clinicians who do not
have six individual measures available to them should select their
appropriate specialty-specific measure set, because that pre-defines
which measures are applicable to their specialty and provides certain
assurances to them. For the majority of MIPS eligible clinicians,
choosing a specialty-specific measure set provides a means to select
applicable measures and, if the set includes fewer than six measures,
also assures that there is no need to report any additional measures.
Furthermore, we will apply a clinical relation test to the quality data
submissions to determine if the MIPS eligible clinician could have
reported other measures. For more information on the clinical relation
test, see section II.E.6.a.(2) of this final rule with comment period,
where we discuss our validation process. Lastly, we are working to
provide additional toolkits and educational materials to MIPS eligible
clinicians prior to the performance period that will ease the burden on
identification of which measures are applicable to MIPS eligible
clinicians. If a MIPS eligible clinician requires assistance, they
may contact the Quality Payment Program Service Center.
Comment: Another commenter requested that CMS add a requirement
that MIPS eligible clinicians report at least six measures, including
one cross-cutting measure (if patient-facing), at least one outcome
measure, and at least one high-priority measure. The commenter was
concerned that high-priority measures may not be reported if they are a
substitute for outcome measures.
Response: We agree with the commenter that we want to maintain an
emphasis on both outcome and high priority measures within the MIPS. We
will take this comment into consideration for future rulemaking.
Comment: Numerous commenters supported the proposal to encourage
reporting of outcome measures over clinical process measures. One
commenter noted that significant work remains to ensure measurement
efforts across the health care system are focused on the most important
quality issues, while other commenters recommended that future quality
metrics emphasize patient care and health outcomes.
Response: We thank the commenters for their support. We intend to
finalize our proposal that one of the six measures a MIPS eligible
clinician must report on is an outcome measure.
Comment: One commenter recommended that patient experience and
patient satisfaction should not be categorized as quality metrics since
these measures and surveys include factors outside the control of the
clinician. The commenter stated that patient satisfaction, while
important, does not always correlate with better clinical outcomes and
may even conflict with clinically indicated treatments. In addition,
another commenter expressed concern that the emphasis on patient
opinions and their care experiences drives up cost.
Response: We do believe it is important to assess patient
experience of care, as it represents items such as communication and
family engagement, which are important factors of the health care
experience and these are measures that are important to patients and
families. While patient experience may not always be directly related
to health outcomes, there is evidence of a correlation between higher
scores on patient experience surveys and better health outcomes. Please
refer to http://www.ahrq.gov/cahps/consumer-reporting/research/index.html for more information on AHRQ studies pertaining to patient
experience surveys and better health outcomes.
Comment: A few commenters supported the proposed reduction in
burden in the MIPS quality performance category, but noted that MIPS
eligible clinician specialties lacking validated outcome measures or
``high priority'' measures are likely to be at a disadvantage under
this performance category because the quality performance category
lacks sufficient specialty-specific quality measures. The commenters
recommended that CMS work with specialty societies and measure
development bodies to increase the availability of specialty-specific
quality measure sets. Another commenter supported the reduced number of
quality measures required for reporting, but recommended that specialty
MIPS eligible clinicians not be required to report a cross-cutting
measure. Some commenters supported CMS's proposal to allow the
reporting of specialty and subspecialty specific measure sets to meet
the submission criteria for the quality performance category, even if
it would mean a MIPS eligible clinician or group would report on fewer
than six measures.
Response: We thank the commenters for their feedback. We believe
that all MIPS eligible clinicians regardless of their specialty have a
high priority measure available. Therefore, we intend to finalize that
if a MIPS eligible clinician does not have an outcome measure
available, they are required to report on a high priority measure.
Comment: Several commenters recommended eliminating the proposed
requirement that an outcome measure and a cross-cutting measure be
reported in the quality performance category. One commenter believed
this proposal may disadvantage small or rural practices and posed
challenges for QCDRs. The commenter noted that some approved QCDRs do
not incorporate value codes in their data collection process, and many
specialized QCDRs may not capture the data needed to report cross-
cutting measures. The commenter believed the requirement for reporting
on cross-cutting measures also makes the 90 percent reporting threshold
for QCDRs nearly impossible to meet. Another commenter stated that,
until more valid and reliable outcome measures are developed, CMS
should keep flexibility of measures throughout and lift the
requirements that certain types of measures be reported, such as
outcomes-based or cross-cutting measures. Other commenters recommended
that specialty-specific measure sets lacking outcome measures be
clearly marked as such and also
[[Page 77104]]
contain notations as to which measures would qualify as high-priority
alternatives. Several commenters recommended CMS provide bonus points
for these measures rather than require all participants to report on
them, and that CMS not require use of any specific measure types in the
initial years of the program.
Response: We appreciate the comments and have examined the policies
very carefully. We have modified our proposal for the transition year
of MIPS and are finalizing that for the applicable performance period,
the MIPS eligible clinician or group would report at least six measures
including at least one outcome measure. If an applicable outcome
measure is not available, the MIPS eligible clinician or group would be
required to report one other high priority measure (appropriate use,
patient safety, efficiency, patient experience, and care coordination
measures) in lieu of an outcome measure. If fewer than six measures
apply to the individual MIPS eligible clinician or group, then the MIPS
eligible clinician or group would be required to report on each measure
that is applicable. We note that generally, we define ``applicable'' to
mean measures relevant to a particular MIPS eligible clinician's
services or care rendered.
We are not finalizing the requirement that one of the measures must
be a cross-cutting measure. Although we still believe that the concept
of having a common set of measures available for clinicians to draw
from is important, we understand that not all of these measures are the
most meaningful to clinicians and their scope of practice. We do
strongly recommend, however, that where appropriate, clinicians
continue to perform and submit data on these measures to CMS. Lastly,
while we recognize that there are limitations in the current set of
available outcome measures, we believe that a strong emphasis on
outcome-based measurement is critical to improving the quality of care.
Due to these limitations in the available outcome measure set, we are
finalizing that a MIPS eligible clinician may select another high
priority measure if an outcome measure is not available.
Comment: A few commenters recommended that CMS provide a ``safe
harbor'' for reporting on new quality measures with innovative
approaches and improvement by allowing entities to register ``test
measures'' that would not be scored but would count as a subset of
the six quality measures with a participation credit. In addition, the
commenters stated that CMS should provide a transitional period during
the first half of 2017 in which MIPS eligible clinicians can receive
written confirmation from CMS that their intended measures meet the
requirements. The commenter expressed concern that CMS needs to provide
specifications and a scoring methodology for the population health
measures to improve transparency.
Response: As noted in other sections of this final rule with
comment period, we are providing a transitional year for the first
performance period under the MIPS. We also note that commenters
successfully reporting an appropriate specialty-specific measure set
for a sufficient portion of their beneficiary population will have met
all minimum reporting requirements for the quality category. We
appreciate the commenter's feedback and will incorporate their
suggestion as we develop toolkits and educational materials. We refer
the commenter to section II.E.5.b.(6) and II.E.6. of this final rule
with comment period for information on population health measures and
the MIPS scoring methodology respectively.
Comment: Another commenter urged CMS to pursue the following
policies in the quality performance category: The commenter urged CMS
to reconsider its proposal to require reporting on a minimum of six
measures, if six measures apply. Instead, CMS should encourage the use
of non-MIPS measures associated with a QCDR and/or allow MIPS eligible
clinicians to select measures that directly relate to their clinical
specialty and outcomes for their patients; and CMS should carefully
monitor modifications to the cross-cutting measures list and ensure
that at least one cross-cutting measure remains on this list for each
category of MIPS eligible clinicians to allow them to remain compliant
with the proposed requirements. Alternately, CMS could develop an
option similar to the outcomes measures reporting requirement that
would allow the MIPS eligible clinician to report a different type of
measure, such as a high priority measure, if a cross-cutting measure
does not apply.
Response: We thank the commenter for their feedback and will take
these recommendations into consideration for future rulemaking. We
would like to note that there are already a number of outcome and
specialty-specific measure sets available for reporting. In addition,
the cross-cutting measure requirement is not being finalized.
Comment: One commenter recommended that CMS develop a pilot
program/test within the first MIPS implementation year that identifies
a core measure set that allows direct comparison of MIPS eligible
clinician performance, for specialties where commonly applicable
metrics allow for such a set. The
commenter supported the general flexibility of quality reporting, but
was concerned that the existing proposal may not foster true
comparisons and performance could vary based on the measures selected
to report rather than differences in quality performance. Another
commenter encouraged CMS to identify a strategy to assess the most
appropriate number of measures and distribution of metrics that MIPS
eligible clinicians should be required to report. The commenter
believed these analyses would provide necessary information for CMS to
make evidence-based decisions with regard to changes to the quality
measures reporting requirements to ensure an accurate account of the
quality of care individual patients are receiving.
Response: The majority of the quality measures that are being
included in the MIPS program have already been utilized in PQRS for
many years. In addition, we have created specialty-specific measure
sets that may be utilized by specialists. We do not believe we need a
pilot program as these measures have already been tested. The quality
measures go through a rigorous evaluation process prior to being
accepted in the MIPS program. With respect to the ideal number of
measures that should be required per the commenter's suggestion above,
we believe that our final submission requirement of six measures is
the appropriate number based on our experience under the PQRS, VM and
Medicare EHR Incentive Programs. We will however take the commenter's
suggestion into consideration for future analyses and rulemaking.
Comment: A few commenters were concerned that using self-reported
measures and tying payment to self-reported quality measures will give
MIPS eligible clinicians an incentive to select and report measures on
which they perform well, especially when they have a large number of
measures from which to choose. The commenters were also concerned that
MIPS eligible clinicians are not likely to select certain high priority
measures because of unfavorable results, such as overuse measures (for
example, imaging for low-back pain) or because of the effort required
to collect the measure (for example, the CAHPS for MIPS survey). The
commenters stated self-reporting would tend to produce compressed
ranges for measures that are scored in
[[Page 77105]]
MIPS, which they believed would mean MIPS eligible clinicians would
receive different incentive payments based on very small gradations in
performance.
Other commenters expressed concern that the ability of MIPS
eligible clinicians to select their own measures could result in the
reliance on low-bar measures that do not drive value-based care. The
commenters recommended that CMS encourage MIPS eligible clinicians to
report both an outcome measure and a high priority measure representative of
their patient populations. Another commenter stated CMS should finalize
requirements that provide more explicit standards around the type and
caliber of measures that MIPS eligible clinicians and groups must
report. The commenter encouraged CMS to utilize variations in weighting
and scoring of measures to incentivize greater reporting on clinical
and patient-reported outcomes measures. The commenter supported the
inclusion of patient-reported outcomes and patient experience measures
in MIPS.
Other commenters recommended re-evaluation of the quality measures
required by MIPS. The commenters stated that under the proposed rule,
MIPS eligible clinicians participating in MIPS would choose six quality
measures to report, one of which must be an outcome measure, and
another a cross-cutting measure. The commenters recognized that CMS
proposed this approach to reduce administrative burden and allow
clinicians the flexibility to choose appropriate measures; however,
they were concerned that this approach may not meaningfully advance the
quality of care provided to Medicare beneficiaries. The commenters
stated that, given the financial incentive, they would expect that MIPS
eligible clinicians will select those measures on which they are already high-
performing and on which they believe they can be at the top of the
curve. Thus, they will focus more effort on the few areas that are
existing strengths, and have limited incentive to drive improvement in
a broad set of areas. The commenters recommended that CMS leverage the
work of the Core Quality Measure Collaborative--which brought together
stakeholders from America's Health Insurance Plans (AHIP), CMS and the
National Quality Forum (NQF), as well as national physician
organizations, employers and consumers--and select core sets of
measures for each specialty to report. The commenters also proposed
bonus points for clinicians who choose to report innovative, outcome-
based measures in addition to the core set.
Response: We agree with the commenters that there are certain
challenges in using self-reported measures rather than a core or common
measure set that all clinicians would be required to submit. We also
appreciate the emphasis placed on outcome measurement. We do however
believe that there are certain challenges in creating a core or common
measure set for clinicians, as compared to other settings, due to the
various practice and specialty types that clinicians may practice
under. However, we have included the measures in the core measure sets
that were developed by the Core Quality Measure Collaborative in the
MIPS measure set and several of the specialty-specific measure sets.
Lastly, we note that, as indicated in other sections of this rule, the
first performance period of MIPS is a transitional year. We will take
these comments into consideration for future rulemaking and will
continue to monitor whether clinicians select only low-bar measures or
measures on which their performance is already high. We will address
any changes to policies based on these monitoring activities through
future rulemaking.
Comment: A few commenters recommended that CMS remove the
requirement that specialists reporting under the specialty-specific
measure set report a cross-cutting measure because they believed that
the list of cross-cutting measures was not truly applicable to all
specialties. For example, the commenters stated that emergency medicine
MIPS eligible clinicians have only one proposed cross-cutting measure
that is somewhat relevant: PQRS #317: High Blood Pressure Screening
and Follow-Up. The commenters stated that the measure is problematic
for emergency medicine because follow-up is required for any patient
outside of the ``normal'' range. While the measure does include
exclusion for patients in ``emergent or urgent situations where time is
of the essence and to delay treatment would jeopardize the patient's
health status,'' the commenters noted that a substantial number of ED
patients are inadvertently included in the universe addressed by this
measure, requiring burdensome documentation, follow-up, and even
unnecessary downstream medical care.
Response: We appreciate the comments and have examined the policies
very carefully. As discussed above, we have modified our proposal for
the transition year of MIPS. We are not requiring a cross-cutting
measure but rather are finalizing that for the applicable performance
period, the MIPS eligible clinician or group would report at least six
measures including at least one outcome measure. If an applicable
outcome measure is not available, the MIPS eligible clinician or group
would be required to report one other high priority measure
(appropriate use, patient safety, efficiency, patient experience, and
care coordination measures) in lieu of an outcome measure. If fewer
than six measures apply to the individual MIPS eligible clinician or
group, then the MIPS eligible clinician or group would be required to
report on each measure that is applicable or may report more measures
that are applicable. We note that generally, we define ``applicable''
to mean measures relevant to a particular MIPS eligible clinician's
services or care rendered.
Comment: Some commenters urged CMS to take advantage of promoting a
new set of cross-cutting quality measures--including measures generally
applicable to patients with rare, chronic, and multiple chronic
conditions--that would incorporate a patient-centered perspective,
adding a critical patient voice to quality measurement.
Response: We appreciate the suggestion and will take it into
consideration in the future.
Comment: Other commenters supported the reporting criteria for
cross-cutting measures and outcome measures. The commenters hoped that
CMS would work with specialties that do not fall under the American
Board of Medical Specialties' board certification to develop specialty-
specific measure sets for clinicians such as physical therapists, as
this may help clinicians who are less familiar with the program report
successfully. Additionally, the commenters supported the flexibility of
reporting either the specialty-specific measure set or the six
measures.
Response: We appreciate the commenters' support. We welcome
suggestions for additional specialty-specific measure sets in the
future.
Comment: Another commenter urged CMS to use the recommendations of
the National Academy of Medicine's (NAM) 2015 Vital Signs report,
available at http://www.nationalacademies.org/hmd/Reports/2015/Vital-Signs-Core-Metrics.aspx, to identify the highest priority measures for
development and implementation in the MIPS program.
Response: When we identified high priority measures, we sought
feedback from numerous stakeholders and we encourage commenters to
submit any specific suggestions for future consideration. We will take
this specific suggestion into consideration for future rulemaking.
[[Page 77106]]
Comment: A few commenters recommended that CMS provide
clarification on how proposed specialty-specific measure sets will be
scored, given that many have fewer than the required number of measures and
do not include a required outcome or high priority measure. The
commenters were also concerned that many sets may not be applicable for
sub-specialists, and many specialties do not have a proposed specialty-
specific measure set. In addition, the commenters stated that the
number of applicable measures in a specialty-specific measure set may
be reduced based on the proposed submission mechanism. For example, the
commenters sought clarification as to whether a urologist who reports
the one eCQM in the set (PQRS 50: Urinary Incontinence: Assessment of
Presence or Absence Plan of Care for Urinary Incontinence in Women) is
only accountable for the one eCQM and not accountable for reporting on
an outcome or high priority measure.
Response: We would like to explain that if fewer than six measures
apply to the MIPS eligible clinician or group, the MIPS eligible
clinician or group would be required to report on each applicable
measure or may report more measures that are applicable. We note that
generally, we define ``applicable'' to mean measures relevant to a
particular MIPS eligible clinician's services or care rendered.
Additionally, groups that report on a specialty-specific measure set
that has fewer than six measures would only need to report the measures
within that specialty-specific measure set. Please see section II.E.6.
of this final rule with comment period for more on scoring. Finally, we
would like to explain that if a MIPS eligible clinician or group
reports via a data submission method that only has one applicable
measure reportable via that method, the MIPS eligible clinician or
group is only responsible for the measure that is applicable via that
method. Alternatively, if a MIPS eligible clinician or group reports
via a data submission method that does not have any measures reportable
via that method, the MIPS eligible clinician or group must choose a
data submission method that has one or more applicable measures. Given
the potential for gaming in this situation, we will monitor whether
MIPS eligible clinicians appear to be actively selecting submission
mechanisms and measure sets with few applicable measures; we will
address any changes to policies based on these monitoring activities
through future rulemaking. We will also seek to expand the availability
of measures available for reporting via all submission methods to the
extent feasible.
Comment: Some commenters recommended that CMS include in the
specialty-specific measure sets those cross-cutting measures
that are most applicable to the specialty, rather than maintaining a
separate list of cross-cutting measures and requiring MIPS eligible
clinicians to refer to two lists. The commenters recommended that a
geriatric measure set be created that will encourage geriatrician
reporting and measures directly associated with improvements in care
for the elderly.
Response: We agree with the commenter and although we are not
finalizing the requirement that MIPS eligible clinicians must report on
a cross-cutting measure, we do still believe these measures add value.
Therefore, we have incorporated the appropriate cross-cutting measures
into the specialty-specific measure sets located in Table E of the
Appendix in this final rule with comment period.
Comment: Another commenter noted that there may be MIPS eligible
clinicians whose services overlap in one or more specialty areas, and
that flexibility is therefore necessary, yet believed that, in order
for payers and patients to have a clear comparison, the ability to
distinguish clinicians on like metrics is critical. Thus, with regard
to specialty-specific measure sets, the commenter recommended that MIPS
eligible clinicians be required to select a minimum number of quality
measures from within their appropriate specialty-specific measure set.
The commenter recommended that CMS continue to explore specialty-
specific measure sets for additional specialty and subspecialty areas
in order to enhance and refine meaningful comparisons over time.
Response: If a clinician has a specialty set and submits all of the
measures in that set (which may be fewer than six), they can
potentially achieve a maximum quality score, depending on their
performance. If the measure set has fewer than six measures, and the
clinician reports all the measures in that set, there is not a
requirement for further reporting. We thank the commenters for the
suggestion and intend to work with the specialty societies to further
develop specialty measure sets, specifically those that would be
applicable for subspecialists.
Comment: Some commenters urged CMS to hold all MIPS eligible
clinician types to the six measure requirement, suggesting that a sub-
specialty could select from the broader specialty list to reach six
measures, or if necessary, report cross-cutting measures to achieve six
measures if they have insufficient specialty-specific measure sets
available to them.
Response: We appreciate the commenters' suggestion and agree that
it is important for clinicians to submit a sufficient number of
measures. However, we are concerned that some subspecialists do not
currently have a sufficient number of applicable measures to reach our
6 measure requirement; we are working with specialty societies to
ensure that all specialists soon have access to a sufficient number of
measures. To assure that these subspecialists report a sufficient
number of measures in the interim period, we are finalizing our
proposal to allow subspecialists to submit a specialty-specific measure
set fully in lieu of meeting the six measure minimum requirement.
Comment: One commenter urged CMS to be more transparent on how
designations used for high priority are determined. The commenter
stated that since bonus points are factored into the determination of a
domain or a measure's priority, it is vital that CMS consider
recommendations from measure stewards and QCDR entities for this
determination.
Response: We define high priority measures as outcome, patient
experience, patient safety, care coordination, cost, and appropriate
use. These measures are designated and identified in rulemaking, based
on their NQF designation or if the measures are not NQF endorsed, based
on their NQS domain designation or measure description as defined by
the measure owners, stewards and clinical experts. We welcome
commenters' feedback on high priority measure determinations in the
future.
Comment: Some commenters stated that measure applicability should
be determined by analyzing the MIPS eligible clinician's claims, not
just their specialty designation.
Response: We agree and intend to determine measure applicability
based on claims data whenever possible. Absent claims data we would use
other identifying factors such as specialty designation. Generally, we
define ``applicable'' to mean measures relevant to a particular MIPS
eligible clinician's services or care rendered. When we initially
proposed the specialty-specific measure sets we factored into
consideration both of the elements the commenter suggested.
Comment: A few commenters encouraged CMS to emphasize that
specialty-specific measures sets are intended as a helpful tool as
opposed to a required set of submissions. The
[[Page 77107]]
commenters believed it is simpler for all MIPS eligible clinicians to
report on six measures when they have eligible patients within the
denominators of the approved measures so that everyone meets the same
standards. Another commenter recommended that specialists and sub-
specialists be required to meet the same program expectations including
reporting on six measures. The commenter stated that if six measures
are not available in the sub-specialty list, the MIPS eligible
clinicians would need to report at the higher specialty level or cross-
cutting measure until they reach a total of six measures. If CMS allows
a lower number of quality measures for a particular specialty group in
MIPS, the lower number of measures for reporting should be available to
all MIPS eligible clinicians. If specialists and sub-specialists do not
report on six measures, the commenter recommended that they should
receive a score of zero for measures not reported.
Response: We agree with the commenters that specialty-specific
measure sets are intended to be helpful to MIPS eligible clinicians
under the MIPS program. While it may be simpler to require the same six
measures of all MIPS eligible clinicians, we do not believe it is
appropriate to hold MIPS eligible clinicians accountable for measures
that are not within the scope of their practice. The specialty-specific
measure sets include measures from the comprehensive list of MIPS
quality measures available (Reference Table A). Measures within the
specialty-specific measure set should be more relevant for the
specialists and should be easier to identify and report. If a MIPS
eligible clinician does not believe the measures within a specialty-
specific measure set are relevant for their practice, they can choose
any six measures within the comprehensive quality measure list. If a
specialty measure set is further broken out by sub-specialty, we
recommend that the MIPS eligible clinician submit measures
within the sub-specialty set. We have made every effort to ensure the
sub-specialty set includes the relevant measures for the particular
sub-specialty.
Comment: Another commenter approved of the proposed specialty-
specific measures for the MIPS quality category and encouraged the
creation of more specialty-specific measure sets. The commenter stated
that currently, many specialty-specific measure sets have fewer than
six measures, and many also do not have any outcome based measures. In
addition, some of the specialty-specific measure sets have few or no
EHR submission-eligible measures. The commenter urged CMS to prioritize
e-specified measures currently listed as registry-only to enable
clinicians to make maximum use of their CEHRT for reporting. The
commenter also requested that CMS clarify MIPS eligible clinicians'
obligations for quality measure reporting when no single reporting
method will meet the reporting requirements even though the full
specialty-specific measure set would do so.
Response: We thank the commenter for their support of specialty-
specific measure sets. It is our intent to adopt more specialty-
specific measure sets over time, especially as new measures become
available. Although some of the specialty-specific measure sets do not
contain six measures, they all contain an outcome or other high
priority measure. When a MIPS eligible clinician chooses to report a
specialty-specific measure set, they are only required to report what is
in the set and what is reportable through the selected data submission
mechanism. We note that, in rare situations where a MIPS eligible clinician
submits data for a performance category via multiple submission
mechanisms (for example, submits data for the quality performance
category through a registry and claims), we would score all the options
(such as scoring the quality performance category with data from a
registry, and also scoring the quality performance category with data
from claims) and use the highest performance category score for the
MIPS eligible clinician's final score. We would not, however, combine the
submission mechanisms to calculate an aggregated performance category
score. We refer readers to section II.E.6. of this final rule with
comment period for more information on scoring. Lastly, we agree with
the commenter that eCQMs are a priority, and we intend to continue
adopting additional measures of this type in the future. We intend to
continue leveraging MIPS eligible clinicians' use of CEHRT for quality
reporting requirements to the greatest extent possible.
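The scoring rule described in this response (score each submission mechanism separately and use the highest performance category score, never aggregating across mechanisms) can be sketched as a simple selection step. The sketch below is only an illustrative reading of the policy, not CMS's scoring implementation; the function name and mechanism labels are hypothetical.

```python
# Illustrative sketch of the multiple-submission-mechanism rule:
# each mechanism's data is scored on its own, and the highest
# resulting performance category score is used toward the final
# score. Scores are never combined across mechanisms.

def quality_category_score(scores_by_mechanism: dict) -> float:
    """Return the highest category score among separately scored
    submission mechanisms (hypothetical labels such as "registry"
    or "claims"), each computed from that mechanism's data alone."""
    if not scores_by_mechanism:
        raise ValueError("no submission mechanisms scored")
    return max(scores_by_mechanism.values())

# Registry data happens to score higher here, so that score is used.
used = quality_category_score({"registry": 54.0, "claims": 47.5})
print(used)  # 54.0
```

The key point the sketch captures is that the mechanisms compete rather than combine: reporting through a second mechanism can only help, never lower, the category score.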
Comment: A few commenters supported CMS' focus on outcome measures,
and specifically supported CMS' proposal to require MIPS eligible
clinicians to report on at least one outcome measure and to allow MIPS
eligible clinicians to earn two additional points for each additional
outcome measure reported because the commenters stated that outcome
measures provide more meaning and value for Medicare beneficiaries and
are critical for delivering high quality care. Several other commenters
commended CMS' plan to increase the requirements for reporting outcome
measures over the next several years through future rulemaking, as more
outcome measures become available. The commenter recommended that CMS
consider accelerating the implementation of additional outcome or high
quality measures, and expressed support for additional bonus points
awarded to MIPS eligible clinicians for reporting additional outcome or
high quality measures. One commenter agreed that outcome measures
should be emphasized in the future, as these are the true indicators of
health care services, reflected directly in a patient's health status.
Another commenter recommended that CMS develop both clinical
outcome measures (for example, survival for patients with cancer and
other life-threatening conditions) and patient-reported outcome measures (for
example, quality of life, functional status, and patient experience) to
support this aim.
Response: We thank the commenters and agree; we believe outcome
measures are critical to quality improvement. We will take the
commenters' suggestions into consideration for future rulemaking.
Comment: Other commenters stated that if quality is based on good
outcomes, MIPS eligible clinicians may be deterred from treating the sickest
patients since doing so would negatively impact their numbers, thereby
resulting in sick patients not receiving timely and proper treatment
and increasing national medical expenditures.
Response: We have confidence in the clinician community and its
commitment to their patients' overall wellbeing. To date, there is no
evidence from the PQRS, VM, or Medicare EHR Incentive Program for EPs
that clinicians have been deterred from seeing all types of patients
seeking their care. We also note that many outcomes measures are risk-
adjusted to account for beneficiary severity prior to treatment. We do
recognize this issue is a concern for some stakeholders and will
monitor MIPS eligible clinicians' performance under the MIPS for this
unintended consequence.
Comment: A few commenters recommended that CMS set limits on some
of the measures that may be reported by multiple MIPS eligible
clinicians with respect to one patient. For example, many beneficiaries
will see multiple MIPS eligible clinicians. Hypothetically, the
commenters believed it would not be appropriate for the body mass index
(BMI) measure to be reported by a patient's primary care
[[Page 77108]]
physician, cardiologist, endocrinologist, ophthalmologist, and
rheumatologist in the same year.
Response: We thank the commenters for the suggestion and will take
it into consideration in the future.
Comment: Another commenter opposed CMS' overall policy to attempt
to assess patient experience and satisfaction under the quality
performance category of MIPS with outcomes-based measures. The
commenter stated that these measures and surveys include factors that
may be outside the control of the MIPS eligible clinician, such as
hospital nursing and staff behavior and performance and wait times in a
hospital setting due to inadequate staffing levels and physical plant
design. Also, patient satisfaction, while important, does not always
correlate with better clinical outcomes and may even conflict with
clinically indicated treatments. Another commenter believed patients
should be asked to report outcomes across a continuum of care domains
including treatment benefit, side effects, symptom management, care
coordination, shared decision-making, advanced care planning, and
affordability.
Response: We respectfully disagree and believe that outcomes-based
measures and high priority measures are critical to measuring health
care quality. We thank the commenter also for their thoughts on patient
satisfaction surveys, but we believe it is appropriate to measure and
incentivize directly MIPS eligible clinicians' performance on patient
experience surveys which uniquely present patients the opportunity to
assess the care that they received. There is evidence that performance
on patient experience surveys is positively correlated with better
patient outcomes. We intend to continue working with stakeholders to
improve available measures.
Comment: Other commenters stated the measures in the physical
medicine specialty-specific measure set are all process measures and
that the only way one can report on six out of seven measures is via a
registry. Although the measures could be applicable to some Physical
Medicine and Rehabilitation (PM&R) physicians, the commenters believed
they are not applicable to all PM&R MIPS eligible clinicians. The
commenters urged CMS to remove the specialty-specific measure set and
work with American Academy of Physical Medicine and Rehabilitation
(AAPM&R) on identifying better measurements for their specialty.
Response: If MIPS eligible clinicians find that the measures within
a specialty-specific measure set are not applicable to their practice,
they may report any of the measures that are available under the MIPS
program. We believe that the physical medicine specialty-specific
measure set is applicable to PM&R MIPS eligible clinicians and that
this policy appropriately accommodates those MIPS eligible clinicians
that are unable to report the full specialty-specific measure set.
Although all measures within the specialty-specific measure set may not
be applicable to all PM&R clinicians, we believe that most PM&R
clinicians will be able to report the measures within the set because
they are relevant for most within the specialty. If a MIPS eligible
clinician finds that they are unable to report the specialty-specific
measure set, they are able to report any six measures from the larger
quality measure set. We will continue to work with specialty societies
to adjust the specialty-specific measure sets as more relevant measures
become available. We also welcome specific feedback from MIPS eligible
clinicians who are specialists on what quality measures would be most
appropriate for their specialty-specific measure set.
Comment: Another commenter supported the reporting of specialty-
specific measure sets as meeting the full requirements in the quality
performance category because specialty MIPS eligible clinicians
struggle to meet many other measures outside their domain and should
not be penalized for not going outside their specialty by having to
find additional measures to report that may not be appropriate for the
care they provide.
Response: We thank the commenter for their support. We note that
the only additional measure that would be calculated as part of a MIPS
eligible clinician's quality score is the population-based measure,
reflected in Table B of the Appendix in this final rule with comment
period, which does not require any data submission and only applies to
groups of 16 or more clinicians. For more information on this measure we refer
readers to the Global and Population-Based Measures section below.
Comment: Several commenters suggested that quality measurement and
reporting must measure things that are clinically meaningful and should
emphasize outcomes over process measures. The commenters added that
quality measurement should also incorporate patient experience measures
and patient-reported outcomes measures (PROMs), and quality measures
should be disaggregated by race/ethnicity, gender, gender identity,
sexual orientation, age, and disability status. Another commenter
recommended that patient-reported outcome measures (PROMs) be given
greater weight in the MIPS program. Other commenters encouraged the
inclusion of medication adherence measures beyond those currently
included under the quality performance category.
Response: We agree with commenters that quality measurement must
capture clinically-meaningful topics. We further agree that patient-
reported measures are important and we have included a number of PROMs
in MIPS. We intend to expand this portfolio in the future. We will
consider the commenters' suggestions on disaggregating quality measures
by demographics, particularly in the context of risk-adjustment, and on
medication adherence measures and increased weighting in the future.
Comment: A few commenters recommended that CMS provide an incentive
to MIPS eligible clinicians to submit eCQMs and not deter MIPS eligible
clinicians from using CEHRT for eCQMs. The commenters recommended that
CMS provide an exemption on reporting a cross-cutting measure for MIPS
eligible clinicians who use CEHRT/health IT vendors to report eCQMs for
the quality performance category.
Response: We thank the commenters for these suggestions. We refer
the commenter to section II.E.6. of this final rule with comment period
where we describe our policies for bonus points available for using
CEHRT in a data submission pathway that reports patient demographic
and clinical data electronically from end to end. An exemption from
reporting a cross-cutting measure is not necessary considering our
decision not to finalize a requirement to report a cross-cutting
measure.
Comment: One commenter urged CMS to maintain greater control of the
reporting under Quality Payment Program and to provide more thoroughly
defined measurements. They also urged CMS to incorporate more reporting
requirements that would assess the actual and overall quality of care
being provided to beneficiaries.
Response: We thank the commenter for the feedback. We have
structured the MIPS program to rely on the MIPS eligible clinician's
choice of specialty, which remains in the clinician's control, and
which we expect reflects the services that they provide, as well as the
quality measures that those MIPS eligible clinicians select. The
quality measures go through a rigorous review process to ensure they
are thoroughly defined, as discussed in section II.E.5.c.
of this final rule with
[[Page 77109]]
comment period. We believe the MIPS program is designed to assess
actual and overall quality of care being provided to the beneficiaries.
Comment: Other commenters stated their small staff does not have
time to spend on reporting quality metrics.
Response: It has been our intention to adopt measures that are as
minimally burdensome as possible. We have also adopted several other
policies for smaller practices in order to ensure that MIPS does not
impose significant burdens on them. We encourage the commenters to
contact the Quality Payment Program Service Center for assistance
reporting applicable measures.
Comment: One commenter believed that some flexibility in reporting
requirements under quality would be helpful, especially for small
practices, but encouraged CMS to balance the need for flexibility
against the need for consistent reporting across MIPS eligible
clinicians. Another commenter stated that CMS should allow small
practices to report a smaller number of quality measures, at least for
the initial few years.
Response: We thank the commenter. We have attempted to be flexible
with the measures that we have adopted under MIPS. It has been our
intention to adopt measures that are as minimally burdensome as
possible. We have also adopted several other policies for smaller
practices in order to ensure that MIPS does not impose significant
burdens on them.
Comment: Another commenter supported narrowing the requirements for
improving quality measurement and reporting for MIPS based on data
collected as a natural part of clinical workflow using health
information technology.
Response: We will take this comment into account in the future. We
believe that electronic quality measurement is an important facet of
quality programs more generally.
Comment: One commenter supported allowing flexibility for MIPS
eligible clinicians to choose measures that are relevant to their type
of care.
Response: We thank the commenter and agree.
Comment: Other commenters encouraged CMS and Health Resources and
Services Administration (HRSA) to align the quality measurement
sections of MIPS and the Uniform Data System so that FQHCs can submit
one set of quality data one time for both purposes.
Response: We thank the commenters for their suggestion and will
examine this option for future rulemaking. Please refer to section
II.E.1.d. of this final rule with comment period for more information
regarding FQHCs.
Comment: Some commenters requested that CMS clarify the proposal to
eliminate the need to track and report duplicative quality measures by
modifying its proposal to require that if quality is reported in a
manner acceptable under MIPS or an APM, it would not need to be
reported under the Medicaid EHR Incentive Program. The commenters were
concerned the programs could potentially cause the same conflict CMS
specifically noted MIPS and APMs were intended to correct.
Response: We thank the commenters and have worked to eliminate
duplicative measures between MIPS and other programs where possible. We
intend to continue to align MIPS and the Medicaid EHR Incentive Program
to the greatest extent possible. As we have noted in section II.E.5.g.
of this final rule with comment period, the requirements for the
Medicaid EHR Incentive Program for EPs were not impacted by the MACRA.
There is a requirement to submit CQMs to the state as part of a
successful attestation for the Medicaid EHR Incentive Program. While
the MIPS objectives for the advancing care information performance
category are aligned to some extent with the Stage 3 objectives in the
Medicaid EHR Incentive Program, they are two distinct programs, and
reporting will stay separate.
Comment: Another commenter stated that while the quality section
discusses outcome measures, many of the measures are traditional,
clinic-based process measures. The commenter was unclear how such
measures will drive transformation.
Response: We currently have approximately 64 outcome measures
available from which MIPS eligible clinicians may choose. We do agree
that more work needs to occur on outcome measure development to impact
the quality of care provided. As additional outcome measures are
developed, we will incorporate them through future rulemaking.
Comment: One commenter agreed that moving to more ``high value''
measures or ``measures that matter'' is important. However, the
commenter recommended that neurologists be able to select measures that
have the greatest value in driving improvement for their patients. The
commenter stated that measures considered ``high value'' may differ by
specialty or patient population.
Response: We appreciate the commenter's support. We recommend that
all MIPS eligible clinicians select measures that have the greatest
value in driving improvement for their patients.
Comment: Another commenter suggested that MIPS eligible clinicians
who report different quality measures from the prior year should be
requested to provide the rationale for the change. The commenter
suggested that CMS request the MIPS eligible clinician report data for
the same categories as the prior year to preclude the chance that a
MIPS eligible clinician may be seeking to find loopholes and flaws in
the system.
Response: We appreciate the suggestion and will take it into
consideration for future years of the program. We will also monitor
whether clinicians appear to be switching measures to improve their
scores, rather than due to changing medical goals or patient
populations. We will report back on the results of our monitoring in
future rulemaking.
Comment: A few commenters requested that MIPS eligible clinicians
reporting quality using third party submission mechanisms not certified
to all available measures only be required to report from the list of
measures to which the system is certified. That is, they would receive an
exemption from standard reporting requirements similar to the
flexibility built in for others who lack reportable measures.
Response: We respectfully disagree that an exemption is necessary
in the circumstance the commenters describe. MIPS eligible clinicians
choosing to report data via third party intermediary should select an
entity from the list of qualified vendors that is able to report on the
quality metrics that the MIPS eligible clinician believes are most
appropriate for their practice and that they wish to report to CMS.
Comment: Some commenters encouraged CMS to further evaluate
requiring more than one outcome or high priority measure when more than
one such measure exists and each measures a distinct and different
health outcome; if an applicable outcome measure is not available,
another high priority measure (appropriate use, patient safety,
efficiency, patient experience, and care coordination measures) should
be considered in lieu of an outcome measure. Thus, the commenters
recommended that CMS consider requiring two (or more) outcome or high
priority measures, as a component of the final score, when available.
Response: Thank you for the feedback, and we will consider this in
future rulemaking. We also want to refer this commenter to section
II.E.6.a.(2) of this final rule with comment period
[[Page 77110]]
where we describe the bonus points available for high priority measures
and section II.E.5.b.(3)(a) of this final rule with comment period
where we describe our interest in increasing the emphasis on outcome
measures moving forward.
Comment: Other commenters urged CMS to continue to include process
measures in quality reporting programs while testing relevant outcomes
measures for future inclusion. Specifically, the commenters were
concerned that a small number of orthopedic surgery outcomes measures
currently exist and believed that more time is required to develop
relevant outcomes measures before CMS emphasizes outcomes for specialty
clinicians.
Response: MIPS eligible clinicians are required to submit data on
an outcome measure if available, but if not, another high priority
measure may be selected. We agree with the commenter that additional
outcome measure development needs to occur.
Comment: A few commenters wanted to know if there would be any
impacts (beyond loss of points) if a MIPS eligible clinician chooses to
not report any outcome or high priority condition measures.
Response: The commenters are correct that the only impact of not
submitting outcome or high priority measures would be a loss of points
under the quality performance category.
Comment: Several commenters recommended that CMS reinstate measures
group reporting as an option under MIPS. The commenters stated that by
removing this option CMS has skewed reporting in favor of large group
practices, the majority of whom report through the GPRO web-interface
that allows for and requires reporting on a sampling of patients. One
commenter noted that while measure groups are not the most popular
reporting option in PQRS, MIPS eligible clinicians choosing this option
have had a high success rate and that measures included in a measures
group undergo a deliberate process that ensures a comprehensive picture
of care is measured. One commenter indicated many oncology small
practices use the measure group reporting mechanism which is less
burdensome and a meaningful mechanism for quality reporting for these
practices. Another commenter requested that small practices be able to
continue reporting measures groups on 20 patients. Some commenters
stated by doing away with the measures group quality reporting option,
CMS has actually made this category more difficult for many clinicians
to meet, particularly those in small practices. Another commenter
requested CMS retain the asthma and sinusitis measure groups as
currently included in PQRS.
Response: We did not propose the measures group option under MIPS
because, as commenters noted, very few clinicians utilized this option
under PQRS. Under the MIPS, we substituted what we believe to be a more
relevant selection of measures through specialty-specific measure sets.
Adopting this policy also enables a more complete picture of quality
for specialty practices. We do not believe the specialty-specific
measure set will pose an undue burden on small practices, and it may
make it easier for eligible clinicians, including those in small
practices, to identify quality measures to report under MIPS. We will continue
to assess this policy for enhancements in future rulemaking.
Comment: Other commenters stated the quality requirements are ill-
conceived and unworkable and the severity of illness calculations
unfair (for example, if MIPS eligible clinicians do a good job
preventing complications, they are punished with a low score).
Response: We believe that the quality measures we are adopting for
the MIPS program will appropriately incentivize high quality care,
including care that prevents medical complications. However, we will
monitor the MIPS program's effects on clinical practices carefully.
Comment: Some commenters supported CMS' proposals to require MIPS
eligible clinicians to report only six measures and to remove the NQS
domain requirement for selecting measures as compared to the PQRS, but
opposed CMS' proposed requirement that MIPS eligible clinicians report
on outcomes and high priority measures. The commenters recommended that
CMS incentivize outcomes based measures by assigning them more weight
within MIPS. Additionally, the commenters were concerned that many
specialties do not have access to outcome measures. The commenters
opposed requiring patient experience and satisfaction measures for MIPS
eligible clinicians, noting that evaluating patient experience is best
done using confidential feedback to clinicians. The commenters would
support CMS' use of the patient satisfaction surveys under the
improvement activities performance category if performance was based
only on administering a survey, evaluating results, and addressing the
findings of the survey. The commenters encouraged CMS to give funding
preference for development of measures to those specialties with
limited measures. Another commenter recommended requiring the inclusion
of patient centered measures that reflect the values and interests of
patients, including patient reported outcome measures, patient
experience of care, cross cutting measures, and clinical outcome
measures.
Response: We thank the commenters for their support. However, we do
believe that outcome measures and high priority measures are critical
to measuring health care quality, and are designated high priority for
that reason. We thank the commenter also for their thoughts on patient
satisfaction surveys, but we believe it is appropriate to measure and
incentivize directly MIPS eligible clinicians' performance on patient
experience surveys. We intend to continue working with stakeholders to
improve available measures. We would like to explain for commenters
that the CAHPS for MIPS survey is included under the quality
performance category, as well as the improvement activities performance
category as a high-weighted activity in the Patient Safety and Practice
Assessment subcategory noted in Table H of the Appendix in this final
rule with comment period.
Comment: The commenters requested further clarification on the
number of measures required when specialty-specific measure sets are
used. For example, if a non-patient facing MIPS eligible clinician
submits all measures from a specialty-specific measure set (in Table E
of the Appendix), would they still be allowed to submit other measures
applicable to their practice, such as cross-cutting measures? In a
scenario where a MIPS eligible clinician submits all three available
measures in a specialty-specific measure set and also submits one
cross-cutting measure not listed in the specialty-specific measure
set (therefore submitting a total of four measures), will the MIPS
eligible clinician be penalized for not submitting six total measures?
The commenters requested that the final rule with comment period
include specific requirements on the number of measures required for
MIPS eligible clinicians who elect to submit measures from a specialty-
specific measure set.
Response: We would like to explain that our final policy for the
quality performance category is that, for the applicable continuous
90-day period during the performance period, or longer if the MIPS
eligible clinician chooses, the MIPS eligible clinician or group will
report one specialty-specific measure set, or the measure set defined
at the
[[Page 77111]]
subspecialty level, if applicable. If the measure set contains fewer
than six measures, MIPS eligible clinicians will be required to report
all available measures within the set. If the measure set contains six
or more measures, MIPS eligible clinicians will be required to report
at least six measures within the set. We note that generally, we define
``applicable'' to mean measures relevant to a particular MIPS eligible
clinician's services or care rendered.
Regardless of the number of measures that are contained in the
measure set, MIPS eligible clinicians reporting on a measure set will
be required to report at least one outcome measure or, if no outcome
measures are available in the measure set, report another high priority
measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) within the measure set in
lieu of an outcome measure. For the commenter's specific questions,
there is no penalty or harm in submitting more measures than required.
Rather, this can benefit the clinician because if more measures than
the six required are submitted, we would score all measures and use
only those that have the highest performance, which can result in a
MIPS eligible clinician receiving a higher score. Lastly, we note that
since we are not finalizing the requirement of cross-cutting measures
in the quality performance category, there is no difference in
requirements for patient facing and non-patient facing clinicians in
the quality performance category.
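The finalized reporting requirement above reduces to a simple check: report every measure in a set containing fewer than six, at least six otherwise, and always include an outcome measure (or, if the set has none, another high priority measure). The sketch below is one illustrative reading of that policy; the function, type labels, and data shapes are hypothetical, not CMS code.

```python
# Illustrative check of the measure-set reporting requirement.
# Measures are (name, type) pairs; the type labels are hypothetical.

HIGH_PRIORITY_TYPES = {
    "outcome", "appropriate use", "patient safety", "efficiency",
    "patient experience", "care coordination",
}

def meets_measure_set_requirement(measure_set, reported):
    """True if `reported` satisfies the set's measure-count and
    outcome/high-priority requirements."""
    # Sets with fewer than six measures must be reported in full;
    # larger sets require at least six reported measures.
    if len(reported) < min(6, len(measure_set)):
        return False
    # At least one outcome measure is required, or another high
    # priority measure when the set contains no outcome measure.
    if any(t == "outcome" for _, t in measure_set):
        return any(t == "outcome" for _, t in reported)
    return any(t in HIGH_PRIORITY_TYPES for _, t in reported)

# A three-measure set with no outcome measure: all three must be
# reported, and the patient safety measure satisfies the high
# priority requirement.
small_set = [("m1", "process"), ("m2", "process"),
             ("m3", "patient safety")]
print(meets_measure_set_requirement(small_set, small_set))  # True
```

Note that, per the response above, reporting more measures than the minimum carries no penalty; only the highest-performing measures would count toward the score.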
Comment: One commenter supported the flexibility provided for non-
patient facing MIPS eligible clinicians; however, the commenter
suggested that CMS continue to keep in mind that most measures across
the MIPS components apply to patient-facing encounters. The commenter
recommended that CMS work with medical specialty and subspecialty
groups to determine how to best expand the availability of clinically
relevant performance measures for non-patient facing MIPS clinicians,
or ways to reweight MIPS scoring to provide these clinicians with
credit for activities that more accurately align with their role in the
treatment of a patient.
Response: We appreciate the commenters' suggestions and will take
them into consideration in future rulemaking. We would like to explain
that we consistently work closely with specialty societies and intend
to continue engaging with them on future MIPS policies.
Comment: Several commenters supported the decision from CMS to
reduce the number of mandatory quality measures for reporting from nine
to six, and appreciated steps to clarify reporting requirements when
fewer than six applicable measures are available. Some commenters
believed that the best approach when directly applicable measures are
not available is to minimize the number of measures required for
reporting and focus instead on the measures that do apply to the
clinician and patient. Additionally, these commenters stated there is
value in the stratification of data across different identifiers,
particularly for some gastrointestinal (GI) services with differential
impacts across patient groups; however, the lack of existing data
related to factors such as ethnicity and gender makes data
stratification particularly difficult and often irrelevant. The
commenters requested that CMS engage in an open dialogue once
recommendations are received from the ASPE if they believe it necessary
to move forward with proposals impacting GI care.
Response: We appreciate the commenters' support. We have an open
dialogue and appreciate feedback from all federal agencies and
stakeholders. We will closely examine the ASPE studies when they are
available and incorporate findings as feasible and appropriate through
future rulemaking. We look forward to working with stakeholders in this
process.
Comment: One commenter supported the goals for meaningful
measurement but indicated that there are challenges to implementing
policies to achieve them, stating that the proposed quality performance
category is overly complex, largely unattainable, and lacks meaningful
measures, transparency, and appropriate risk adjustment. The commenter
recommended further collaboration with
specialty societies to create policies which will engage surgeons,
including surgeons who were unable to successfully participate in PQRS.
Response: We appreciate the comment. As stated above, we
consistently work closely with specialty societies to solicit measures
and we intend to continue engaging with them on future MIPS policies.
Comment: Some commenters requested that CMS allow flexibility
around outcome measure reporting requirements and allow suitable
alternatives where necessary, as many stakeholders still face barriers
in the development of and use of meaningful outcome measures. The
commenters discouraged CMS from assigning extra weight to outcome
measures, as there are no standard reporting and risk-adjustment
methodologies, which may unfairly disadvantage some MIPS eligible
clinicians and advantage others. The commenters supported
comprehensive measurement and consideration of measures in the IOM/NQS
Quality Domains.
Response: We appreciate the commenters' suggestions and will take
them into consideration in the future. However, to address the
commenters' concern regarding an unfair disadvantage for some eligible
clinicians as it relates to the availability and reporting of outcome
measures, we have provided flexibility of reporting for those eligible
clinicians that do not have access to outcome measures by allowing
eligible clinicians to report on high priority measures as well. Since
high priority measures span all eligible clinician specialties, we do
not believe some eligible clinicians will have an advantage of
reporting over others.
Comment: Another commenter asked CMS to clarify whether a measure
type listed as an `intermediate outcome' would count equally as an
`outcome' measure. Another commenter recommended that intermediate
outcome measures should only be counted as outcome measures if there is
a strong evidence base supporting the intermediate outcome as a valid
predictor of outcomes that matter to patients.
Response: We consider measures listed as an ``intermediate
outcome'' measure to be outcome measures. In addition, it is important
to note that if an applicable outcome measure is not available, a MIPS
eligible clinician or group would be required to report one other high
priority measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) in lieu of an outcome
measure.
Comment: Another commenter requested clarity on whether a clinician
is evaluated on the same six quality measures as the group they report
in. The commenter wanted to know what happens if one of those group
measures is not applicable to the clinician.
Response: MIPS eligible clinicians that report as part of a group
are evaluated on the measures that are reported by the group, whether
or not the group's measures are specifically applicable to the
individual MIPS eligible clinician. In addition, MIPS eligible
clinicians who form a group, but have elected to report as individuals,
will each be evaluated only on the measures they themselves report.
Comment: Some commenters were concerned about group reporting of
quality measures in multispecialty practices. Thus, the commenters
recommended that CMS allow MIPS
[[Page 77112]]
eligible clinicians in multi-specialty practices to report on measures
that are meaningful to their specialty, and that each MIPS eligible
clinician in a group be assessed individually, and all scores of the
MIPS eligible clinicians reporting under the same TIN be aggregated to
achieve one score for the entire practice.
Response: We appreciate the commenters' suggestions. Based on the
example provided, clinicians in this situation may find reporting as
individual MIPS eligible clinicians preferable to reporting as a group.
We will take these recommendations into consideration for future
rulemaking.
Comment: One commenter recommended a cap of nine measures in the
future if CMS believes that allowing more than the required six is
needed.
Response: We appreciate the commenter's suggestion. We will take
this into consideration in the future.
Comment: A few commenters applauded CMS's extensive efforts to
include specialists in the quality component of MIPS. The commenters
recommended that CMS determine which specialties do not have enough
measures to select at least six that are not topped out and exempt
those specialists from the quality category until enough measures
become available. Some commenters were pleased that CMS recognized that
very specialized MIPS eligible clinicians may not meet all six
applicable measures.
Response: We appreciate the commenters' support. MIPS eligible
clinicians who do not have enough measures to select at least six
measures should choose all of the measures that do apply to their
practice and report them. We will conduct a data validation process to
determine whether MIPS eligible clinicians have reported all measures
applicable to them if the MIPS eligible clinician does not report the
minimum required 6 measures. As an alternative, the MIPS eligible
clinician may choose a specialty-specific measure set. If the measure
set contains fewer than six measures, MIPS eligible clinicians will be
required to report all available measures within the set. If the
measure set contains six or more measures, MIPS eligible clinicians
will be required to report at least six measures within the set.
Regardless of the number of measures that are contained in the measure
set, MIPS eligible clinicians reporting on a measure set will be
required to report at least one outcome measure or, if no outcome
measures are available in the measure set, report another high priority
measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) within the measure set in
lieu of an outcome measure. Generally, we define ``applicable'' to mean
measures relevant to a particular MIPS eligible clinician's services or
care rendered. MIPS eligible clinicians who do not have six individual
measures available to them should select their appropriate specialty-
specific measure set, because that pre-defines which measures are
applicable to their specialty and provides protections to them. For the
majority of MIPS eligible clinicians choosing the specialty-specific
measure sets provides protections to MIPS eligible clinicians because
we have pre-determined which measures are most applicable, based on the
MIPS eligible clinicians specialty.
We do intend to provide toolkits and educational materials to MIPS
eligible clinicians that will reduce the burden of determining which
measures are applicable. We do not believe, however, that it is
appropriate to exempt specialties from the quality performance category
if they have fewer than six measures or topped out measures. Rather,
these specialties are still able to report on quality measures, just a
lesser number of measures. We refer readers to section II.E.6.
of this final rule with comment period for the discussion of authority
under 1848(q)(5)(F) to reweight category weights when there are
insufficient measures applicable and available.
Comment: A few commenters requested clarification on whether the
measures are separate for each individual performance category such as
quality, and advancing care information or whether one measure can
apply to more than one category.
Response: Each measure and activity applies only for the
performance category in which it is reported. However, some actions
might contribute to separately specified activities, such as reporting
a quality measure through a QCDR, which may make it easier for the MIPS
eligible clinician to perform an improvement activity that also
involves use of a QCDR. However, it is important to note that the CAHPS
for MIPS survey receives credit in the quality and improvement
activities performance categories. In addition, certain improvement
activities may count for bonus points in the advancing care information
performance category if the MIPS eligible clinician uses CEHRT.
Comment: One commenter stated that while CMS has provided CPT codes
for consideration for PQRS in the past, it has not provided the type of
CPT codes to be used for MIPS assessment.
Response: The CPT codes that have historically been available under
the PQRS program will be made available for the MIPS as part of the
detailed measure specifications which will be posted prior to the
performance period at QualityPaymentProgram.cms.gov. More information
on the detailed measure specifications is available in section
II.E.5.c. of this final rule with comment period.
Comment: The commenter requested clarification as to whether a MIPS
eligible clinician is obligated to report on measures if the procedures
are performed in a surgery center or hospital.
Response: Yes, in the instances where those procedures or services
are billed under Medicare Part B or another payer that would have
services that fall under the measure's denominator, MIPS eligible
clinicians are required to report on measures where denominator
eligible patients are designated within the measure specification.
Comment: In addressing CMS' question of whether to require one
cross-cutting measure and one outcome measure, or one cross-cutting
measure and one high priority measure (which is inclusive of the
outcome measures), one commenter recommended that CMS allow MIPS
eligible clinicians to select one cross-cutting and one high priority
measure. The commenter noted that this approach gives MIPS
eligible clinicians more flexibility and gives CMS time to develop
additional outcome measures to choose from.
Response: We appreciate the comment. However, we believe it is
important to include the requirement to report at least one outcome
measure if it is available given the importance of outcome measures on
assessing health care quality. As noted above, we are finalizing our
proposal to require one outcome measure, or if an outcome measure is
not available, another high priority measure. We are not finalizing our
proposal to require one cross-cutting measure.
Comment: Some commenters did not support CMS' proposal to require
the reporting of outcome/high priority measures in order to achieve the
maximum quality performance category points. The commenters recommended
that instead, CMS reward high priority measures with bonus points, but
cap the bonus points CMS Web Interface users can earn. The commenters
recommended their approach because more large practices can use the CMS
Web Interface option, which includes several high priority measures,
and this could favor these MIPS eligible
[[Page 77113]]
clinicians over those in smaller practices. Another commenter expressed
concern about CMS's requirement to report on high priority measures
(including specific outcomes-based measures) and cross-cutting
measures, and stated that those standards are currently
counterproductive due to inherent
difficulty with tracking outcomes in cancer care, in part because
meaningful outcomes often require years of follow-up, and because
sample sizes of cancer patients may be very small at the clinician
level. The commenter further noted that the vast majority of oncology
measures existing today are process-based rather than outcomes-based,
necessitating an adjustment period for outcomes-based measures in
cancer care. The commenter recommended that CMS clearly state in the final
rule with comment period that the outcomes measure reporting
requirement does not apply to oncology clinicians until more meaningful
quality measures are developed for oncology care.
Response: We would like to explain that our proposals do include
bonus points (subject to a cap) for reporting on high priority
measures; we refer readers to section II.E.6.a.(2)(e) of this final
rule with comment period. We believe that outcome measures and high
priority measures are critical to measuring health care quality, and
they are designated high priority for that reason. We intend to
continue working with stakeholders to improve available measures.
Comment: Other commenters believed that in order to allow and
encourage MIPS eligible clinicians to report the highest quality data
available, which includes outcomes measures in EHR and registry data,
and support innovation, CMS should allow MIPS eligible clinicians to
report at least one of the six required quality measures under MIPS
through a QCDR. Some commenters strongly encouraged CMS to move toward
a streamlined set of high priority measures that align incentives and
actions of organizations across the health care system. The commenters
also recommended that CMS give NQF-endorsed measures priority.
Response: We thank the commenters for their feedback and intend to
finalize our proposal that one of the six measures a MIPS eligible
clinician must report on is an outcome measure. We also understand the
concerns that not all MIPS eligible clinicians may have an outcome
measure available to them. However, we do believe that all MIPS
eligible clinicians, regardless of their specialty, have a high
priority measure available for reporting. Therefore, we intend to
finalize that if a MIPS eligible clinician does not have an outcome
measure available, they are required to report on a high priority
measure. In
addition, a QCDR is one of the data submission mechanisms available to
a MIPS eligible clinician to report measures.
Comment: A few commenters encouraged CMS to provide additional time
for small or mid-sized practices to transition to CEHRT and QCDRs by
ensuring that there are a sufficient number of measures available for
claims-based reporting, particularly in the quality performance
category, in the first several performance years under MIPS.
Response: We appreciate the commenters' concerns, and while we do
have the goal of ultimately moving away from the claims-based
submission mechanism, we recognize that this mechanism must be
maintained while electronic-based submission mechanisms continue to
develop and mature.
Comment: One commenter wanted to ensure that the proposed reporting
does not detract from the patient-clinician clinical visit because it
is crucial for the patient-clinician relationship.
Response: We agree that the patient-clinician encounter is
paramount. Reporting can be captured through the EHR or through a
registry at a later time.
Comment: One commenter stated that the proposed guidelines cannot
be applied to all of the specialties and sub-specialties uniformly.
Response: We are assuming that the commenter is referring to the
proposed data submission requirements for the quality performance
category. We are providing flexibility on the submission mechanisms and
selection of measures by MIPS eligible clinicians because we understand
that varying specialties have differing quality measurement needs for
their practices.
Comment: Some commenters were concerned about lowering the
threshold on measures and thought the measure criteria were
insufficient. One commenter was also concerned that there was no
requirement for reporting on a core set of measures for every primary
care physician (PCP) and specialist.
Response: We respectfully disagree with the commenter. Drawing from
our experiences under the sunsetting programs, we believe it is more
important to ensure that clinicians are measured on quality measures
that are meaningful to their scope of practice as well as quality
measures that emphasize outcome measurement or other high priority
areas rather than a large quantity of measures.
Comment: One commenter asked for clarification on whether six non-
MIPS measures (QCDR) can be selected by a MIPS eligible clinician and
be used to meet the reporting criteria.
Response: Yes, this is allowable for reporting using QCDRs as long
as one of the selected measures is an outcome measure, or another high
priority measure if an outcome is unavailable.
Comment: Some commenters urged CMS to ensure the proposed
validation process to review and validate a MIPS eligible clinician's
inability to report on the quality performance category requirements--
similar to the Measure-Applicability Validation (MAV) process--is
transparent. The commenters urged consultation with clinician
stakeholders as CMS develops the new validation process, expressing
concerns related to the MAV, including the lack of clarity in how the
MAV actually functions. Another commenter recommended CMS develop a
validation process that will review and validate a MIPS eligible
clinician's or group's ability to report on a sufficient number of
quality measures and a specialty-specific sample set--with a sufficient
sample size--including both a cross-cutting and outcome measure. One
commenter requested a timeframe for the validation process so they may
prepare.
Response: We agree with the commenters and intend to provide as
much transparency into the data validation process for the quality
performance category under MIPS as technically feasible. The validation
process will be part of the quality performance category scoring
calculations and not a separate process as the MAV was under PQRS. We
refer readers to section II.E.6.a.(2) of this final rule with comment
period for more information related to the quality performance scoring
process. Lastly, we are working to provide additional toolkits and
educational materials to MIPS eligible clinicians prior to the
performance period that will ease the burden of identifying which
measures are applicable to MIPS eligible clinicians. If a MIPS
eligible clinician requires assistance, they may contact the Quality
Payment Program Service Center.
Comment: Another commenter recommended delegating each medical
specialty the task of choosing three highly desirable outcomes to focus
on each year and rewarding those outcomes to promote quality in lieu of
using 6-8 dimensions of meaningful use
[[Page 77114]]
performance combined with numerous quality indicators.
Response: We agree with the commenter that focusing on outcomes and
outcome measurement is important, as we have indicated in this final
rule with comment period. We are, however, required by statute to
measure MIPS eligible clinicians' performance on four performance
categories, of which quality and advancing care information are two.
Comment: One commenter stated that claims data is misleading and
may corrupt attempts to analyze information with ``big data''
approaches, because a significant proportion of claims data only
captures the first four codes that a clinician enters into the medical
record. The commenter further noted that many clinicians documented
numerous diagnoses into the medical record, unaware that some vendors
only accept the first four diagnoses and that some EHR systems arrange
diagnoses in alphabetical order despite how the clinician entered them.
The commenter suggested CMS mandate no restriction on the number of
diagnoses entered into the 1500 Health Insurance claim form--or at
least mandate the National Uniform Claim Committee (NUCC)
recommendation to expand the maximum amount of diagnoses from four to
eight.
Response: Although the commenter's recommendation is outside the
scope of the proposed rule, we note that we do not believe that this
approach compromises either data mining or claims processing.
Comment: One commenter requested CMS provide guidance regarding the
treatment of measures that assess services that are not Medicare
reimbursable, such as postpartum contraception. The commenter
recommended that CMS adopt the measures in the Medicaid Adult and Child
Core Sets that have been specified and endorsed at the clinician level.
Response: We agree that working to align MIPS quality measures with
Medicaid is important and intend to develop a ``Medicaid measure set''
that will be based on the existing Medicaid Adult Core Set (https://www.medicaid.gov/Medicaid-CHIP-Program-Information/By-Topics/Quality-of-Care/Downloads/Medicaid-Adult-Core-Set-Manual.pdf). Further, we
believe it is important to have MIPS quality measure alignment with
private payers and have engaged the Core Quality Measures Collaborative
(https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/Core-Measures.html) to develop measures to
be used both by private payers and the MIPS program. Our strategic
interest is a future state where measurement in multi-payer systems,
Medicaid, and Medicare can be seamlessly integrated into CMS programs.
After consideration of the comments regarding our proposal on
submission criteria for quality measures excluding CMS Web Interface
and CAHPS for MIPS, we are finalizing at Sec. 414.1335(a)(1) that
individual MIPS eligible clinicians submitting data via claims and
individual MIPS eligible clinicians and groups submitting via all
mechanisms (excluding CMS Web Interface, and for CAHPS for MIPS survey,
CMS-approved survey vendors) are required to meet the following
submission criteria. For the applicable period during the performance
period as discussed in section II.E.5.b.(3) of this final rule with
comment period, the MIPS eligible clinician or group will report at
least six measures including at least one outcome measure. If an
applicable outcome measure is not available, the MIPS eligible
clinician or group will be required to report one other high priority
measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) in lieu of an outcome
measure. If fewer than six measures apply to the individual MIPS
eligible clinician or group, then the MIPS eligible clinician or group
will be required to report on each measure that is applicable. We
define ``applicable'' to mean measures relevant to a particular MIPS
eligible clinician's services or care rendered.
Alternatively, for the applicable performance period in 2017, the
MIPS eligible clinician or group will report one specialty-specific
measure set, or the measure set defined at the subspecialty level, if
applicable. If the measure set contains fewer than six measures, MIPS
eligible clinicians will be required to report all available measures
within the set. If the measure set contains six or more measures, MIPS
eligible clinicians will be required to report at least six measures
within the set. Regardless of the number of measures that are contained
in the measure set, MIPS eligible clinicians reporting on a measure set
will be required to report at least one outcome measure or, if no
outcome measures are available in the measure set, report another high
priority measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) within the measure set in
lieu of an outcome measure. MIPS eligible clinicians who choose to
report measures in addition to those contained in the specialty-
specific measure set will not be penalized for doing so, provided such
MIPS eligible clinicians follow all requirements discussed here.
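The finalized submission criteria above amount to a small decision procedure. The sketch below is an illustrative restatement only, not part of the regulation: the `Measure` class, field names, and the `meets_submission_criteria` helper are our own, and "applicable" is simplified to a list of measures relevant to the clinician's services.

```python
# Hypothetical sketch of the finalized criteria at Sec. 414.1335(a)(1);
# all names and data shapes here are illustrative, not CMS's.
from dataclasses import dataclass

HIGH_PRIORITY_TYPES = {
    "outcome", "appropriate use", "patient safety",
    "efficiency", "patient experience", "care coordination",
}

@dataclass(frozen=True)
class Measure:
    name: str
    type: str  # e.g. "outcome", "process", "patient safety"

def meets_submission_criteria(selected, applicable):
    """Check a clinician's selected measures against the finalized criteria.

    selected   -- measures the clinician reported
    applicable -- all measures relevant to the clinician's services
    """
    # If fewer than six measures apply, every applicable measure must be reported;
    # otherwise at least six measures must be reported.
    if len(applicable) < 6:
        if {m.name for m in selected} < {m.name for m in applicable}:
            return False
    elif len(selected) < 6:
        return False

    # At least one outcome measure is required; if no outcome measure is
    # applicable, one other high priority measure is required in its place.
    if any(m.type == "outcome" for m in applicable):
        return any(m.type == "outcome" for m in selected)
    return any(m.type in HIGH_PRIORITY_TYPES for m in selected)
```

For example, a clinician with only three applicable measures passes by reporting all three, provided one is an outcome measure or, failing that, another high priority measure.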
In accordance with Sec. 414.1335(a)(1)(ii), MIPS eligible
clinicians and groups will select their measures from either the list
of all MIPS measures in Table A of the Appendix in this final rule with
comment period, or a specialty-specific measure set in Table E
of the Appendix in this final rule with comment period. Note that some
specialty-specific measure sets include measures grouped by
subspecialty; in these cases, the measure set is defined at the
subspecialty level.
We also are finalizing the definition of a high priority measure at
Sec. 414.1305 to mean an outcome, appropriate use, patient safety,
efficiency, patient experience, or care coordination quality measure.
These measures are identified in Table A of the Appendix in this final
rule with comment period.
We are not finalizing our proposal to require MIPS eligible
clinicians and groups to report a cross-cutting measure because we
believe we should provide flexibility during the transition year of the
program as MIPS eligible clinicians adjust to MIPS. However, we are
seeking comments on adding a requirement to our modified proposal that
patient-facing MIPS eligible clinicians would be required to report at
least one cross-cutting measure in addition to the high priority
measure requirement for further consideration for MIPS year 2 and
beyond. We are interested in feedback on how we could construct a
cross-cutting measure requirement that would be most meaningful to MIPS
eligible clinicians from different specialties and that would have the
greatest impact on improving the health of populations.
(ii) Submission Criteria for Quality Measures for Groups Reporting via
the CMS Web Interface
We proposed at Sec. 414.1335 the following criteria for the
submission of data on quality measures by registered groups of 25 or
more MIPS eligible clinicians who want to report via the CMS Web
Interface. For the applicable 12-month performance period, we proposed
that the group would be required to report on all measures included in
the CMS Web Interface completely, accurately, and timely by populating
data fields for the first 248 consecutively ranked and assigned
Medicare beneficiaries in the order in which they appear in the group's
sample for each module/measure. If the pool of eligible assigned
beneficiaries is less than 248, then the group would
[[Page 77115]]
report on 100 percent of assigned beneficiaries. A group would be
required to report on at least one measure for which there is Medicare
patient data. We did not propose any modifications to this reporting
process. Groups reporting via the CMS Web Interface are required to
report on all of the measures in the set. Any measures not reported
would be considered zero performance for that measure in our scoring
algorithm.
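The CMS Web Interface rules described above reduce to two simple operations: a sampling cutoff and a zero-fill for unreported measures. The sketch below is purely illustrative; the function names and inputs are ours, not any CMS specification.

```python
# Illustrative sketch of the CMS Web Interface reporting rules above;
# function names and data shapes are hypothetical.

def web_interface_sample(ranked_beneficiaries, cutoff=248):
    """Return the beneficiaries a group reports on for each module/measure.

    Groups report the first 248 consecutively ranked assigned Medicare
    beneficiaries; if fewer than 248 are assigned, they report on
    100 percent of assigned beneficiaries.
    """
    return ranked_beneficiaries[:cutoff]

def measure_scores(reported, required_measures):
    """Score each required CMS Web Interface measure.

    Any measure in the set that is not reported is treated as zero
    performance in the scoring algorithm.
    """
    return {m: reported.get(m, 0.0) for m in required_measures}
```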
Lastly, from our experience with using the CMS Web Interface under
prior Medicare programs, we are aware that groups may register for this
mechanism and have zero Medicare patients assigned and sampled to them.
We note that should a group have no assigned patients, then the group,
or individual MIPS eligible clinicians within the group, would need to
select another mechanism to submit data to MIPS. If a group does not
typically see Medicare patients for which the CMS Web Interface
measures are applicable, or if the group does not have adequate billing
history for Medicare patients to be used for assignment and sampling of
Medicare patients into the CMS Web Interface, we advise the group to
participate in the MIPS via another reporting mechanism.
As discussed in the CY 2016 PFS final rule with comment period (80
FR 71144), beginning with the 2017 PQRS payment adjustment, the PQRS
aligned with the VM's beneficiary attribution methodology for purposes
of assigning patients for groups that registered to participate in the
PQRS Group Reporting Option (GPRO) using the CMS Web Interface
(formerly referred to as the GPRO Web Interface). For certain quality
and cost measures, the VM uses a two-step attribution process to
associate beneficiaries with TINs during the period in which
performance is assessed. This process attributes a beneficiary to the
TIN that bills the plurality of primary care services for that
beneficiary (79 FR 67960-67964). We proposed to continue to align the
2019 CMS Web Interface beneficiary assignment methodology with the
measures that used to be in the VM: The population quality measures
discussed in the proposed rule (81 FR 28188) and total per capita cost
for all attributed beneficiaries discussed in proposed rule (81 FR
28188). As MIPS is a different program, we proposed to modify the
attribution process to update the definition of primary care services
and to adapt the attribution to different identifiers used in MIPS.
These changes are discussed in the proposed rule (81 FR 28188). We
requested comments on these proposals.
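The core of the attribution idea above is plurality counting of primary care services by TIN. The sketch below is a hedged simplification: the actual VM/MIPS process is a two-step methodology with detailed service definitions (79 FR 67960-67964), and the helper and data shape here are illustrative only.

```python
# Hedged, simplified sketch of plurality-based beneficiary attribution;
# the real process involves two steps and specific primary care service
# definitions not modeled here.
from collections import Counter

def attribute_beneficiary(primary_care_service_tins):
    """Attribute a beneficiary to the TIN that bills the plurality of
    their primary care services.

    primary_care_service_tins -- one TIN per primary care service billed
    for this beneficiary during the performance period.
    """
    if not primary_care_service_tins:
        return None  # no primary care services: beneficiary not attributed
    counts = Counter(primary_care_service_tins)
    tin, _ = counts.most_common(1)[0]
    return tin
```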
The following is a summary of the comments we received regarding our
proposal on submission criteria for quality measures for groups
reporting via the CMS Web Interface.
Comment: Some commenters supported the general direction and intent
of the proposed quality performance category, and particularly
supported CMS's alignment between the CMS Web Interface measure set and
the quality measure reporting and performance requirements for the
Medicare Shared Savings Program Track 1 organizations. Another commenter
supported national alignment of quality measures.
Response: We thank the commenters for their support.
Comment: Another commenter stated that CMS should either remove or
modify some of the quality measures used as part of CMS Web Interface,
as existing criteria make them difficult to achieve for large group
practices and may not reflect current recommendations. The commenter
provided examples of three specific measures and why they present
challenges to practice in the context of large groups using CMS Web
Interface. For example, the commenter stated that the depression
remission measure (MH-1) measures the number of patients with major
depression, as defined as an initial PHQ-9 score > 9, who demonstrate
remission at 12 months, as defined as a PHQ-9 score < 5. The
requirement for PHQ-9 use for evaluating patients combined with a
follow-up evaluation is problematic for many large group practices. The
measure must be recorded for 248 patients, a very difficult bar for
large multi-specialty group practices which refer patients for
treatment and follow-up to psychiatrists if they have a PHQ-9 score
greater than 9. The
measure seems to be designed for group practices that do not have this
type of referral pattern to psychiatrists.
Another problematic example the commenter provided was the
medication safety measure (CARE 3). The commenter stated that the score
includes all medications the patient is taking, including over-the-
counter and herbal medications, and therefore relies on the patient
recalling and accurately reporting this information. For each
medication on the list, clinicians must include the dose, route (for
example, by mouth or by injection), and frequency. This measure is
difficult to meet, even if medication lists are substantially complete.
According to the specifications, if a multi-vitamin is listed but ``by
mouth'' is not recorded then the encounter(s) is scored as non-
performance. Finally, the commenter believed that the blood pressure
measure must be updated to reflect recent national consensus about
appropriate blood pressure measurements. The commenter stated that a
national consensus has developed that blood pressure should vary by age
and diagnosis. However, the measure requires a strict policy of
controlling to less than 140/90 for hypertensive patients, regardless
of age, and 120/80 for screening purposes. These levels are not
consistent with current medical evidence or opinion such as those noted
in the Eighth Joint National Committee.
Response: We do not believe it is appropriate to remove or modify
measures, including the three mentioned by the commenter, used in the
CMS Web Interface reporting. On the three specific measures the
commenter listed, we have been working with the multi-stakeholder
workgroup for the Core Quality Measures Collaborative (CQMC). These
measures are included in the CQMC measure set for ACO and certified
patient-centered medical homes. To align with the CQMC set, CMS has
included these measures within the CMS Web Interface. We believe all
measures within the CMS Web Interface are appropriate for the data
submission method and level of reporting.
Comment: A few commenters recommended that, to ensure comparability
across reporting mechanisms, CMS allow groups reporting through the
CMS Web Interface to select which six quality measures will be used to
calculate the quality performance score. Currently, the CMS
Web Interface requires 18 measures, so if a group performs highly on
some CMS Web Interface measures but not others, their overall quality
score will be lowered.
Response: We thank the commenters for this feedback, but we believe
that requiring groups to report all measures included in the CMS Web
Interface provides us a more complete picture of quality at a given
group practice. All of the measures reported on the CMS Web Interface
will be used to determine an overall quality performance category
score.
Comment: Other commenters expressed that CMS Web Interface
reporting should be coupled with useful reports for MIPS eligible
clinicians, including timely and actionable claims data, to enable
value-based decisions.
Response: We do not believe it to be operationally feasible to
provide claims data as part of a report for the transition year of the
MIPS; however, we will work to provide as much information to MIPS
eligible clinicians as possible and
[[Page 77116]]
will consider this request for future rulemaking.
Comment: Some commenters suggested that CMS identify a minimum
number of beneficiaries to report on through CMS Web Interface based on
the number of MIPS eligible clinicians in the group.
Response: We appreciate the comment. In past years under the PQRS
program, there were different beneficiary sample sizes based on the
size of the group: specifically, a sample of 411 patients for groups of
100 or more and a sample of 248 patients for groups of 25-99. However,
after additional data analysis, we found that the differing sample
sizes had no impact on a group's performance, so we modified the sample
to 248 patients in the CY 2015 final rule (79 FR 67789). We do not
believe that issuing different sample sizes by group size reduces
burden; rather, we believe that a larger sample size is more
burdensome.
Comment: Another commenter had concerns about the statistical
accuracy of the requirement for reporting the first 248 patients. The
commenter had particular concerns about regional and seasonal bias for
larger groups because performance measures for large groups would be
based on data from patients in the first few weeks of the year.
Response: The methodology for sampling and assignment for the CMS
Web Interface has been tested extensively, and we believe that the
methodology appropriately controls for the biases the commenter
suggests. However, we will monitor performance data reported via the
CMS Web Interface.
Comment: Some commenters recommended that in addition to the
proposed CMS Web Interface used to submit quality measures, a
transactional Electronic Data Interchange (EDI) capability be developed
to achieve CMS' goal of permitting multiple methods for submission. The
commenters believed multiple technologies have benefits in different
situations for various stakeholders. The commenters also suggested that
the CMS Web Interface should also become usable by Medicaid, other
payers and purchasers on a voluntary basis.
Response: We thank the commenters for these suggestions and will
take them under consideration in the future as we continue implementing
the MIPS program.
Comment: Some commenters expressed concern with the proposal to
limit reporting through the CAHPS for MIPS survey and the CMS Web
Interface systems to groups of 25 clinicians or more. The commenters
expressed that small practices would benefit greatly from the use of
the CMS Web Interface, and limiting this option is a further burden
upon solo and small practices who often do not have the resources to
purchase more advanced health IT systems with more sophisticated
reporting capabilities. The commenters recommended that CMS look at
options that ensure solo and small practices have the same
opportunities to succeed as larger groups. Another commenter proposed
that CMS consider opening the CAHPS for MIPS survey reporting program
to all patient-facing MIPS eligible clinicians with the exception of
certain specialties such as psychiatry, addiction medicine, emergency
medicine, critical care, and hospitalists.
Response: The CAHPS for MIPS survey is available for all MIPS
groups. The CMS Web Interface has been limited to groups of 25 or
greater because smaller groups or individual MIPS eligible clinicians
have not been able to meet the data submission requirements on the
sample of the Medicare Part B patients we provide.
Comment: One commenter recommended that a transactional Electronic
Data Interchange (EDI) capability be developed so as to achieve CMS'
goal of permitting multiple methods for submission. The commenter
believed multiple technologies have benefits in different situations
for various stakeholders and the industry should do the hard work now
to support flexible technologies. The commenter also suggested that CMS
Web Interface should also become usable by Medicaid, other payers and
purchasers on a voluntary basis.
Response: We appreciate the suggestions and will take them into
consideration in future rulemaking.
After consideration of the comments regarding our proposal on
submission criteria for quality measures for groups reporting via the
CMS Web Interface, we are finalizing the policies as proposed.
Specifically, we are finalizing at Sec. 414.1335(a)(2) the following
criteria for the submission of data on quality measures by registered
groups of 25 or more MIPS eligible clinicians who want to report via
the CMS Web Interface. For the applicable 12-month performance period,
the group will be required to report on all measures included in the
CMS Web Interface completely, accurately, and timely by populating data
fields for the first 248 consecutively ranked and assigned Medicare
beneficiaries in the order in which they appear in the group's sample
for each module or measure. If the sample of eligible assigned
beneficiaries is less than 248, then the group will report on 100
percent of assigned beneficiaries. A group will be required to report
on at least one measure for which there is Medicare patient data.
Groups reporting via the CMS Web Interface are required to report on
all of the measures in the set. Any measures not reported will be
considered zero performance for that measure in our scoring algorithm.
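As an illustrative aside (not part of the regulatory text), the finalized submission criteria reduce to two simple rules: report on the first 248 consecutively ranked assigned beneficiaries per measure, or all of them if fewer than 248 are assigned, and treat any unreported measure as zero performance. A minimal sketch, with all function names hypothetical:

```python
def beneficiaries_to_report(assigned_count: int, sample_cap: int = 248) -> int:
    """Number of assigned Medicare beneficiaries a group must report on
    for a CMS Web Interface measure: the first `sample_cap` consecutively
    ranked beneficiaries, or 100 percent of them if fewer are assigned."""
    return min(assigned_count, sample_cap)

def measure_score(reported: bool, performance: float) -> float:
    """Illustrative scoring rule: any measure not reported is considered
    zero performance in the scoring algorithm."""
    return performance if reported else 0.0
```

For example, a group with 1,000 assigned beneficiaries would report on the first 248, while a group with 150 would report on all 150.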
We are finalizing our proposal to continue to align the 2019 CMS
Web Interface beneficiary assignment methodology with the measures that
used to be in the VM: The population quality measure discussed in the
proposed rule (81 FR 28188) and total per capita cost for all
attributed beneficiaries discussed in the proposed rule (81 FR 28196).
We are also finalizing our proposal to modify the attribution process
to update the definition of primary care services and to adapt the
attribution to different identifiers used in MIPS. These changes are
discussed in the proposed rule (81 FR 28196).
(iii) Performance Criteria for Quality Measures for Groups Electing
to Report Consumer Assessment of Healthcare Providers and Systems
(CAHPS) for MIPS Survey
The CAHPS for MIPS survey (formerly known as the CAHPS for PQRS
survey) consists of the core CAHPS Clinician & Group Survey developed
by the Agency for Healthcare Research and Quality (AHRQ), plus
additional survey questions to meet CMS's information and program
needs. For more information on the CAHPS for MIPS survey, please see
the explanation of the CAHPS for PQRS survey in the CY 2016 PFS final
rule with comment period (80 FR 71142 through 71143). While we
anticipate that the CAHPS for MIPS survey will closely align with the
CAHPS for PQRS survey, we may explore the possibility of updating the
CAHPS for MIPS survey under MIPS; specifically, we may not finalize all
proposed Summary Survey Measures (SSMs).
We proposed to allow registered groups to voluntarily elect to
participate in the CAHPS for MIPS survey. Specifically, we proposed at
Sec. 414.1335 the following criteria for the submission of data on the
CAHPS for MIPS survey by registered groups via CMS-approved survey
vendor: For the applicable 12-month performance period, the group must
have the CAHPS for MIPS survey reported on its behalf by a CMS-approved
survey vendor. In addition, the group will need to use another
submission mechanism (that is, qualified registries, QCDRs, EHR etc.)
to complete their quality data submission.
[[Page 77117]]
The CAHPS for MIPS survey would count as one cross-cutting and/or
patient experience measure, and the group would be required to submit
at least five other measures through one other data submission
mechanism. A group may report any five measures within MIPS plus the
CAHPS for MIPS survey to achieve the six-measure threshold.
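Put concretely (an illustrative sketch, not regulatory text, with a hypothetical function name): the CAHPS for MIPS survey counts as one of the six required quality measures, so a group electing it must still submit at least five other measures through one other data submission mechanism.

```python
def meets_six_measure_threshold(reports_cahps: bool, other_measure_count: int) -> bool:
    """Check the six-measure quality threshold: the CAHPS for MIPS survey
    counts as one measure; the remaining measures must come from one
    other data submission mechanism (e.g., registry, QCDR, EHR)."""
    total = other_measure_count + (1 if reports_cahps else 0)
    return total >= 6
```

Under this sketch, a group reporting CAHPS plus five registry measures meets the threshold, while CAHPS plus four does not.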
The administration of the CAHPS for MIPS survey would contain a 6-
month look-back period. In previous years the CAHPS for PQRS survey was
administered from November to February of the reporting year. We
proposed to retain the same survey administration period for the CAHPS
for MIPS survey. Groups that voluntarily elect to participate in the
CAHPS for MIPS survey would bear the cost of contracting with a CMS-
approved survey vendor to administer the CAHPS for MIPS survey on the
group's behalf, just as groups do now for the CAHPS for PQRS survey.
Under current provisions of PQRS, the CAHPS for PQRS survey is
required for groups of 100 or more eligible clinicians. Although we are
not requiring groups to participate in the CAHPS for MIPS survey, we do
still believe patient experience is important, and we therefore
proposed a scoring incentive for those groups who report the CAHPS for
MIPS survey. As described in the proposed rule (81 FR 28188), we
proposed that groups electing to report the CAHPS for MIPS survey
would be required to register for the reporting of data. Because we
believe assessing patients' experiences as they interact with the
health care system is important, our proposed scoring methodology would
give bonus points for reporting CAHPS data (or other patient experience
measures). Please refer to the proposed rule (81 FR 28247), for further
details. We solicited comments on whether the CAHPS for MIPS survey
should be required for groups of 100 or more MIPS eligible clinicians
or whether it should be voluntary.
Currently, the CAHPS for PQRS beneficiary sample is based on
Medicare claims data. Therefore, only Medicare beneficiaries can be
selected to participate in the CAHPS for PQRS survey. In future years
of the MIPS program, we may consider expanding the potential patient
experience measures to all payers, so that Medicare and non-Medicare
patients can be included in the CAHPS for MIPS survey sample. We
solicited comments on criteria that would ensure comparable samples and
on these proposals.
The following is a summary of the comments we received regarding
our proposed performance criteria for quality measures for groups
electing to report the CAHPS for MIPS survey.
Comment: One commenter recommended that CMS should require MIPS
eligible clinicians in groups to report a standard patient experience
measure.
Response: We are not requiring groups to report the CAHPS for MIPS
survey for the transition year of MIPS. We are aware that requiring a
standard patient experience measure, such as the CAHPS for MIPS survey,
can be cost-prohibitive for small groups. However, we do believe
patient experience measures are important and are providing bonus
points for the CAHPS for MIPS survey, as discussed in section II.E.6.
of this final rule with comment period.
Comment: Some commenters requested clarification about whether the
CAHPS for MIPS survey would be required for groups of 100+ MIPS
eligible clinicians, as it was under PQRS. Some commenters opposed
mandatory CAHPS for MIPS survey reporting under MIPS and recommended
that CMS allow reporting on the CAHPS for MIPS survey to be voluntary.
Another commenter opposed making the CAHPS for MIPS survey a
requirement for large groups because it is a survey tool to measure
outpatient practices and is not useful for many facility based
practices. The commenter stated that there will be significant
confusion as large groups try to determine which parts of the survey
apply to them.
Response: We would like to explain that the CAHPS for MIPS survey
is optional for MIPS eligible clinician groups. While the CAHPS for
MIPS survey is a standard tool used by large organizations, we
recognize that there are challenges with the survey for certain
specialty clinicians and clinicians who work in certain settings.
Comment: A few commenters urged CMS to include the CAHPS for MIPS
survey, as well as other non-CAHPS experience of care and patient
reported outcomes measures and surveys (including those that are
offered by QCDRs), under the improvement activities performance
category rather than the quality performance category. One commenter
stated that the CAHPS for MIPS survey should be counted as a high-
weighted improvement activity. This commenter stated that this would
simplify the program and ensure that specialists have the same
opportunity as primary care clinicians to earn the maximum number of
points in the quality performance category. The commenter was concerned
that if CMS does not revise this proposal, specialists will be at a
disadvantage as the CAHPS for MIPS survey is less relevant for
specialists, especially surgeons, anesthesiologists, pathologists and
radiologists. If CMS moves forward with the proposed quality
requirements and bonus points for reporting on a patient experience
measure, the commenter requested that CMS clarify whether the CAHPS for
MIPS survey would automatically provide two bonus points or would count
as the one required high priority measure that all MIPS eligible
clinicians must report before bonus points are counted. The commenters
recommended ensuring specialists have the same opportunity as primary
care practices. Other commenters urged CMS to work closely with the
transplant community and the American College of Surgeons to adopt a
patient experience of care measure that is relevant to all surgeons,
including transplant surgeons, and that adequately takes into account
the team-based nature of transplantation and other complex surgery.
Response: We would like to explain for commenters that the CAHPS
for MIPS survey is included under the quality performance category, as
well as under the improvement activities performance category as a
high-weighted activity in the Patient Safety and Practice Assessment
subcategory noted in Table H of the Appendix in this final rule with
comment period. In addition, the CAHPS for MIPS survey measures
complement other measures of care quality by generating information
about aspects of care quality for which patients are the best or only
source of information, such as the degree to which care is respectful
and responsive to their needs (for example, ``patient-centered'');
therefore, these measures are well suited to the quality performance
category. We do recognize that certain specialties such as surgeons,
anesthesiologists, pathologists and radiologists that do not provide
primary care services may not have patients to whom the CAHPS for MIPS
survey could be issued and would therefore not be able to receive any
bonus points for patient experience. However, these specialties do have
the ability to earn bonus points for other high priority measures. We
agree with the commenters that ensuring all specialties have the
ability to earn full points for the quality performance category is
important. We believe that we have constructed the quality category in
a manner where this is true.
[[Page 77118]]
Comment: Other commenters encouraged CMS to require all MIPS
eligible clinicians in groups to report the CAHPS for MIPS survey. One
commenter suggested that the CAHPS for MIPS survey measures extend
beyond the core survey to include questions from the Cultural
Competence supplement and the Health IT supplement. Another commenter was very
concerned that the CAHPS for MIPS survey was optional under MIPS. They
stated that the CAHPS for MIPS survey is the only standardized,
validated tool available in the public domain to capture information
about the experience of care from a patient's perspective. The
commenter requested that CMS finalize this as a mandatory reporting
requirement for groups of 100 or more. In addition, the commenter
further requested that CMS consider developing an easier-to-administer
version in the future. Another commenter stated that CMS should
encourage the development and use of PROMs. Other commenters requested
that CMS reconsider mandating the participation for practice groups of
a certain size, such as 50 MIPS eligible clinicians.
Response: We do not believe that making the CAHPS for MIPS survey
mandatory would be an appropriate policy at this time, but we will
consider doing so for future MIPS performance years. Rather, as we have
indicated at the outset of this rule, we are removing as many barriers
to participation as possible to encourage clinicians to participate
in the MIPS. We are mindful of the reporting burden and expense
associated with patient-reported measures such as the CAHPS for MIPS
survey and do not want to add a cost or reporting burden for clinicians
who prefer to choose other measures. We also believe that by providing
bonus points for patient experience surveys, we are still able to
emphasize that patient experience is an important component of quality
measurement and improvement. We also appreciate the request to consider
developing an easier-to-administer version and will take it into
consideration in the future.
Comment: Other commenters urged CMS to continue exclusion of
pathologists, as non-patient facing, from selection as ``focal
providers'' about whom the CAHPS for MIPS survey asks.
Response: We thank the commenters for their feedback on non-patient
facing MIPS eligible clinicians and the CAHPS for MIPS survey. We agree
that non-patient facing MIPS eligible clinicians should not be
considered the clinician named in the survey who provided the
beneficiary with the majority of the primary care services delivered by
the group practice, that is, the ``focal provider'' for that survey.
Comment: Several commenters supported CMS' proposal to no longer
require that larger practices report on patient experience, explaining
that, historically, this measure was not intended to target emergency
clinicians, yet larger emergency practices were still required to go
through the time and expense of contracting with a certified survey
vendor before finding out whether they were exempt from the
requirement. Another commenter supported voluntary reporting of the
CAHPS for MIPS survey. The commenter stated the CAHPS for MIPS survey
is too long and generates low response rates. The commenter urged CMS
to work with MIPS eligible clinicians, AHRQ, CAHPS stewards, and other
stakeholders to develop means for obtaining patient experience data. A
few commenters stated that many MIPS eligible clinicians survey their
patients' satisfaction in a variety of patient care areas, and these
surveys are often electronic and allow timely submission of feedback
that is valuable to the overall patient care experience. The commenters
suggested that CMS consider allowing MIPS eligible clinicians to survey
their patients through alternative surveys.
Response: We thank the commenters for this feedback and acknowledge
that there may be other potential survey methods. However, the CAHPS
for MIPS survey is the only survey instrument with robust evidence
demonstrating a beneficial impact on quality. For a program of this
scale that also has payment implications, we believe the CAHPS for
MIPS survey is the most appropriate survey to utilize.
Comment: Some commenters stated that small practices cannot afford
to pay vendors to obtain the CAHPS for MIPS survey information for
bonus points.
Response: We would like to explain that the CAHPS for MIPS survey
is optional for all MIPS eligible clinician groups, and that there are
other ways to obtain bonus points, such as by reporting additional
outcome measures.
Comment: Other commenters encouraged CMS to invest resources in
evolving CAHPS instruments--or creating new tools--to be more
meaningful to consumers, more efficient and less costly to administer
and collect, and better able to supply clinicians with real-time
feedback for practice improvement. The commenters would like this to
include continuing research and implementation efforts to combine
patient experience survey scores with narrative questions.
Response: We will take this under advisement for future rulemaking.
Comment: Another commenter supported the proposal to use all-payer
data for quality measures and patient experience surveys. The commenter
supported stratification by demographic characteristics to the degree
that such stratification is feasible and appropriate and thinks CMS
should make this data publicly available at the individual and practice
level.
Response: We thank the commenter for their support. We will take
this recommendation into consideration for future rulemaking.
Comment: A few commenters stated that the potential expansion of
the CAHPS for MIPS survey to all-payer data should be optional, as this
could make the survey more costly and lead to it being unaffordable to
those who use it in its current form. Other commenters recommended that
CMS expand the CAHPS for MIPS patient sample and survey process to
include additional payers, in a process similar to that used by the
HCAHPS, Hospice CAHPS, and the Outpatient and Ambulatory Surgery CAHPS
surveys.
Response: As we continue to evaluate the inclusion of all-payer
data as part of the CAHPS for MIPS survey, we will consider the impact
of implementation as well as viable options.
Comment: One commenter was concerned about the patient satisfaction
surveys, particularly in the context of team-based care delivery. The
commenter noted that individual scoring of patient satisfaction is
prone to misassignment of both good and bad quality. Another commenter
expressed concern about the numerous patient surveys because, although
patient feedback is important, this feedback must be balanced by
acknowledging limitations to these surveys. The commenter mentioned
that selection bias and survey fatigue may become a problem. Another
commenter questioned whether the CAHPS for MIPS survey was an accurate
reflection of the quality of care patients received, or whether it
might be biased by superficial factors. The commenter also questioned
the survey's statistical validity. The commenter encouraged CMS to
explore alternative means of capturing patient experience, which is
different from patient satisfaction.
Response: The CAHPS for MIPS survey is optional for groups.
However, because we believe assessing patients' experiences as they
interact with the health care system is important, our proposed scoring
methodology would give bonus points for reporting CAHPS data (or other
patient experience
[[Page 77119]]
measures). In addition, while patient experience may not always be
associated with health outcomes, there is some evidence of a
correlation between higher scores on patient experience surveys and
better health outcomes. Please refer to http://www.ahrq.gov/cahps/consumer-reporting/research/index.html for more information on AHRQ
studies pertaining to patient experience surveys and better health
outcomes.
Comment: Another commenter stated that the CAHPS for MIPS survey
should modify its wording to reflect that much work is done by a ``care
team'' rather than a ``clinician.''
Response: We thank the commenter for this feedback, which we will
take into consideration for future rulemaking.
Comment: Some commenters believed that the CAHPS for MIPS survey
should count for three measures, including one cross-cutting and one
patient experience measure, noting that in the past, CMS has counted
the CAHPS for PQRS survey as three measures covering one NQS domain.
Another commenter encouraged CMS to require that MIPS eligible
clinicians reporting CAHPS still submit an outcome measure, if one is
available.
Response: We recognize that under the PQRS program, CAHPS surveys
counted as three quality measures rather than one quality measure. To
simplify our scoring and communications we are only counting the CAHPS
for MIPS survey as one measure. We do note, however, that the CAHPS for
MIPS survey would fulfill the requirement to report on a high priority
measure, in those instances when MIPS eligible clinicians do not have
an outcome measure available.
Comment: Other commenters believed that the CAHPS for MIPS survey
is not designed for and is inappropriate for skilled nursing facility
based MIPS eligible clinicians because in many situations the source of
the information is not reliable due to the mental status of the
patients being surveyed. Therefore, the commenters opposed applying
bonuses and/or mandatory requirements to use such surveys in the
quality performance category of MIPS until such surveys are available
for MIPS eligible clinicians practicing in all settings of care.
Response: To ensure meaningful measurement of patient experiences,
we plan to include the CAHPS for MIPS survey as one way to earn bonus
points since we believe this survey is important and appropriate for
the Quality Payment Program. However, we would like to explain that the
CAHPS for MIPS survey is optional for all MIPS eligible clinician
groups, and that there are other ways for skilled nursing facilities to
obtain bonus points, such as by reporting additional outcome measures
or other high priority measures. We encourage stakeholders who are
concerned about a lack of high priority measures to consider
development of these measures and submit them for future use within the
program. In addition, our strategy for identifying and developing
meaningful outcome measures is described in the quality measure
development plan, authorized by section 102 of the MACRA (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). The plan describes
how we intend to consider evidence-based research, risk adjustment, and
other factors to develop better outcome measures.
Comment: Some commenters urged CMS to work with other stakeholders
to improve upon the CAHPS for MIPS survey and/or develop additional
tools for measuring patient experience. The commenters also encouraged
CMS to consider ways to make the CAHPS for MIPS survey easier for
patients to complete, including different options for how it is
administered and employing skip logic to reduce its redundancy, and to
make it more meaningful to clinicians, such as by disaggregating by
different types of patients. Other commenters recommended that CMS
consider having MIPS eligible clinicians report the CAHPS for MIPS
survey using an electronic administration of the instrument because
such tools would be more efficient for administering the survey and
would offer MIPS eligible clinicians real-time feedback for practice
improvement. A few commenters recommended that CMS use short-form
surveys, electronic administration, and alternative instruments as a
means to reduce the burden of surveying while improving utility to
patients and MIPS eligible clinicians.
Response: We are exploring potential options available for the
CAHPS for MIPS administration, including electronic modes of
administration, for the future.
Comment: One commenter requested that clinicians have the option to
use other patient satisfaction surveys, such as the surgical CAHPS
survey.
Response: We thank the commenter for the suggestion and note that
QCDRs would have the option to include the surgical CAHPS survey as one
of their non-MIPS measures, if they so choose. We will however take
this comment into consideration for future rulemaking.
Comment: Another commenter recommended that CMS evaluate the CAHPS
for MIPS survey and remove summary survey measures (SSMs) that make
the survey less relevant for MIPS eligible clinicians and groups that
are not delivering primary care services, such as the ``Access to
Specialists'' SSM, so that the resulting survey would be widely
applicable to a large number of patient-facing MIPS eligible clinicians.
Response: We thank the commenter for the suggestion. We will
continue to explore potential improvements to the CAHPS for MIPS survey
in the future.
Comment: Some commenters opposed implementing the changes to the
Clinician and Group survey items that AHRQ has released as
CG-CAHPS 3.0, as a recent memorandum released by AHRQ indicates
that the changes resulted in increased scores caused by the removal of
low-scoring questions and not an improvement in the experience of
beneficiaries. A few commenters supported retaining lower-performing
CAHPS for MIPS questions as supplemental questions.
Response: We appreciate the interest in retaining survey items that
AHRQ has removed from version 3.0 of CG-CAHPS, and will take that
interest into consideration as we finalize the survey implementation,
scoring, and benchmarking procedures for CAHPS for MIPS. It is
important to note that CAHPS for MIPS will include content in addition
to CG-CAHPS core items, including but not limited to shared decision-
making, access to specialist care, and health promotion and education.
Comment: Other commenters recommended that the CAHPS for MIPS
surveys be conducted closer to the time of a patient-clinician
encounter to improve recall.
Response: We will consider the commenter's recommendations in
future rulemaking.
Comment: One commenter requested that CMS limit additional CAHPS
for MIPS questions and that the CAHPS for MIPS survey either remain the
same as for PQRS or that the questions remain stable for the first few
program years.
Response: For the transition year of MIPS, the CAHPS for MIPS
survey will primarily be the same as the current CAHPS for PQRS survey;
however, as noted, the survey contains additional questions to meet
CMS's program needs. We would like to note that there may be updates
made with regard to those questions that meet CMS's information
[[Page 77120]]
and program needs. Further, we would like to note that in future years
we anticipate that we will revise the CAHPS for MIPS survey. We
anticipate these revisions will not only improve the survey but also
reduce burden.
Comment: Another commenter requested clarification on how CMS can
ensure the data are reliable to drive improvement when CAHPS for MIPS
survey response rates are declining.
Response: Response rates to CAHPS for PQRS (the precursor to CAHPS
for MIPS) are comparable to those of other surveys of patient care
experiences. Under CAHPS for MIPS, we will adjust reported scores for
case mix, which allows the performance of groups to be compared against
the same case mix of patients. Studies have not found evidence that
response rates bias comparisons of case-mix adjusted patient experience
scores.
Comment: Some commenters recommended raising the threshold for the
minimum number of patient CAHPS for MIPS survey responses to 30 to
increase reliability.
Response: We will consider the commenter's recommendations in
future rulemaking.
Comment: One commenter encouraged CMS to consider expanding the use
of CAHPS for all clinicians as a tool in the quality measurement
category of MIPS, with appropriate exclusions for rural and non-patient
facing MIPS eligible clinicians. Additionally, the commenter encouraged
CMS to expand the target population for such surveys to include the
families of patients who have died, and to adapt questions from the
hospice instrument so they can be used in CAHPS surveys of other
settings to assess palliative care eligible clinicians and eligible
clinicians who treat the patients facing the end of life in other
settings other than hospice.
Response: We appreciate the recommendation and will continue to
look at ways to expand the CAHPS survey.
After consideration of the comments regarding our proposed
performance criteria for quality measures for groups electing to report
the CAHPS for MIPS survey, we are finalizing the policies as proposed.
Specifically, we are finalizing at Sec. 414.1335(a)(3) the following
criteria for the submission of data on the CAHPS for MIPS survey by
registered groups via a CMS-approved survey vendor: For the applicable
12-month performance period, a group that wishes to voluntarily elect
to participate in the CAHPS for MIPS survey measures must use a survey
vendor that is approved by CMS for a particular performance period to
transmit survey measures data to CMS. The CAHPS for MIPS survey counts
as one measure toward the MIPS quality performance category and, as a
patient experience measure, also fulfills the requirement to report at
least one high priority measure in the absence of an applicable outcome
measure. In addition, groups that elect this data submission mechanism
must select an additional group data submission mechanism (that is,
qualified registries, QCDRs, EHR, etc.) in order to meet the data
submission criteria for the MIPS quality performance category. The
CAHPS for MIPS survey will count as one patient experience measure, and
the group will be required to submit at least five other measures
through one other data submission mechanism. A group may report any
five measures within MIPS plus the CAHPS for MIPS survey to achieve the
six-measure threshold. We will retain the survey administration period
for the CAHPS for MIPS survey of November through February. Groups that
voluntarily elect to participate in the CAHPS for MIPS survey will bear
the cost of contracting with a CMS-approved survey vendor to administer
the CAHPS for MIPS survey on the group's behalf. Groups electing to
report the CAHPS for MIPS survey will be required to register for the
reporting of data. Only Medicare beneficiaries can be selected to
participate in the CAHPS for MIPS survey.
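The measure counting described above can be sketched in code. This is an illustrative sketch only, not part of the regulation; the function and variable names are hypothetical and chosen for demonstration.

```python
# Hypothetical illustration (not part of the rule): counting quality
# measures for a group that elects the CAHPS for MIPS survey. The survey
# counts as one patient experience measure; the group must submit at
# least five other measures through another submission mechanism to
# reach the six-measure threshold. All names here are assumptions.

REQUIRED_MEASURES = 6  # quality performance category measure threshold


def group_meets_measure_count(elected_cahps: bool,
                              other_measures_submitted: int) -> bool:
    """Return True if the group's total measure count reaches six."""
    cahps_measures = 1 if elected_cahps else 0
    return cahps_measures + other_measures_submitted >= REQUIRED_MEASURES


print(group_meets_measure_count(True, 5))   # CAHPS + 5 others -> True
print(group_meets_measure_count(True, 4))   # CAHPS + 4 others -> False
```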
(b) Data Completeness Criteria
We want to ensure that data submitted on quality measures are
complete enough to accurately assess each MIPS eligible clinician's
quality performance. Section 1848(q)(5)(H) of the Act provides that
analysis of the quality performance category may include quality
measure data from other payers, specifically, data submitted by MIPS
eligible clinicians with respect to items and services furnished to
individuals who are not individuals entitled to benefits under Part A
or enrolled under Part B of Medicare.
To ensure completeness for the broadest group of patients, we
proposed at Sec. 414.1340 the criteria below. MIPS eligible clinicians
and groups who do not meet the proposed reporting criteria noted below
would fail the quality component of MIPS.
Individual MIPS eligible clinicians or groups submitting
data on quality measures using QCDRs, qualified registries, or via EHR
need to report on at least 90 percent of the MIPS eligible clinician or
group's patients that meet the measure's denominator criteria,
regardless of payer, for the performance period. In other words, for
these submission mechanisms, we would expect to receive quality data
for both Medicare and non-Medicare patients.
Individual MIPS eligible clinicians submitting data on
quality measures data using Medicare Part B claims would report on at
least 80 percent of the Medicare Part B patients seen during the
performance period to which the measure applies.
Groups submitting quality measures data using the CMS Web
Interface or a CMS-approved survey vendor to report the CAHPS for MIPS
survey would need to meet the data submission requirements on the
sample of the Medicare Part B patients CMS provides.
We proposed to include all-payer data for the QCDR, qualified
registry, and EHR submission mechanisms because we believe this
approach provides a more complete picture of each MIPS eligible
clinician's scope of practice and provides more access to data about
specialties and subspecialties not currently captured in PQRS. In
addition, we proposed the QCDR, qualified registry, or EHR submission
must contain a minimum of one quality measure for at least one Medicare
patient.
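The percentage-based criteria above amount to a simple ratio check. The following is an illustrative sketch only, not part of the regulation; the function name and example figures are hypothetical.

```python
# Hypothetical illustration (not part of the rule): checking a quality
# data submission against a data completeness threshold. The function
# name and the patient counts below are assumptions for demonstration.

def meets_completeness(reported_patients: int,
                       denominator_eligible_patients: int,
                       threshold: float) -> bool:
    """Return True if the share of denominator-eligible patients with
    reported quality data meets or exceeds the threshold."""
    if denominator_eligible_patients == 0:
        return True  # no denominator-eligible patients to report
    return reported_patients / denominator_eligible_patients >= threshold


# Proposed QCDR/registry/EHR criterion: 90 percent of denominator-
# eligible patients, regardless of payer.
print(meets_completeness(170, 200, 0.90))  # 85% reported -> False
# Transition-year threshold finalized in this rule: 50 percent.
print(meets_completeness(170, 200, 0.50))  # 85% reported -> True
```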
We desire all-payer data for all reporting mechanisms, yet certain
reporting mechanisms are limited to Medicare Part B data. Specifically,
the claims reporting mechanism relies on individual MIPS eligible
clinicians attaching quality information on Medicare Part B claims;
therefore only Medicare Part B patients can be reported by this
mechanism. The CMS Web Interface and the CAHPS for MIPS survey
currently rely on sampling protocols based on Medicare Part B billing;
therefore, only Medicare Part B beneficiaries are sampled through that
methodology. We welcomed comments on ways to modify the methodology to
assign and sample patients for these mechanisms using data from other
payers.
The data completeness criteria we proposed are an increase in the
percentage of patients to be reported by each of the mechanisms when
compared to PQRS. We believe the proposed thresholds are appropriate to
ensure a more accurate assessment of a MIPS eligible clinician's
performance on the quality measures and to avoid any selection bias
that may exist under the current PQRS requirements. In addition, we
would like to align all the reporting mechanisms as closely as possible
with achievable data completeness criteria. We intend to continually
assess the proposed data completeness criteria and will consider
increasing these
[[Page 77121]]
thresholds for future years of the program. We requested comments on
this proposal.
We were also interested in data that would indicate these data
completeness criteria are inappropriate. For example, we could envision
that reporting a cross-cutting measure would not always be appropriate
for every telehealth service or for certain acute situations. We would
not want a MIPS eligible clinician to fail the measure's reporting
requirement in such circumstances; therefore, we solicited feedback on
data and circumstances where it would be appropriate to lower the data
completeness criteria.
The following is a summary of the comments we received regarding our
proposed data completeness criteria.
Comment: The majority of commenters recommended that CMS reduce the
quality reporting thresholds to 50 percent, and not proceed with the
proposals to increase the threshold for successfully reporting a
measure to 80 percent via claims, and 90 percent via EHR, clinical
registry, QCDR, or CMS Web Interface. The commenters cited numerous
concerns and justifications for a modified threshold including: The 50
percent reporting rate allows those MIPS eligible clinicians just
starting to report a quicker pathway to success and to gain familiarity
with the program before such a high threshold is established, an
advanced announcement of an increased threshold through future
rulemaking provides those MIPS eligible clinicians already reporting
sufficient time to implement changes to their practice to meet the
higher threshold, and the proposed thresholds would present a
significant administrative burden and make higher quality scores
difficult to achieve. These commenters believed a majority of MIPS
eligible clinicians would struggle to meet the proposed threshold of 90
percent and that the threshold is unrealistic. Another commenter
opposed CMS's proposal to increase the reporting thresholds because
this leaves MIPS eligible clinicians and third party data submission
vendors with very little room for expected error.
Response: We thank the commenters for their detailed feedback.
Based on the overwhelming feedback received, we do not intend to
finalize the data completeness thresholds as proposed. We did not
intend the increased burden on MIPS eligible clinicians that the
commenters described in detail. We agree with the commenters that some
of the unintended
consequences of having a higher data completeness threshold may
jeopardize the MIPS eligible clinician's ability to participate and
perform well under the MIPS. We want to ensure that an appropriate yet
achievable level of data completeness is applied to all MIPS eligible
clinicians. Based on stakeholder feedback, for the transition year of
MIPS, we will finalize a 50 percent data completeness threshold for
claims, registry, QCDR, and EHR submission mechanisms. This threshold
is consistent with the current PQRS program. Additionally, for the
second year of MIPS, for performance periods occurring in 2018, we are
finalizing a 60 percent data completeness threshold for claims,
registry, QCDR, and EHR submission mechanisms. We believe it is
important to incorporate higher thresholds in future years to ensure a
more accurate assessment of a MIPS eligible clinician's performance on
the quality measures and to avoid any selection bias. We also believe
that we are providing ample notice to MIPS eligible clinicians so they
can take the necessary steps to prepare for this higher threshold for
MIPS payment year 2020. Lastly, we anticipate that, in the 2021 MIPS
payment year and beyond, for performance periods occurring in 2019
forward, as MIPS eligible clinicians gain experience with the MIPS we
would further increase these thresholds over time.
Comment: Another commenter cited specific concerns for QCDRs. The
commenter believed the 50 percent threshold for QCDRs to report should
be maintained for reporting and data completeness because of the
proposed changes to QCDR functionality such as reporting additional
performance categories and requiring MIPS eligible clinician feedback
at least six times a year. Another commenter stated that the rule needs
to maximize the role of QCDRs to ensure reporting and data submission
are flexible, meaningful, and useful. The commenter stated that
increasing the QCDR requirement from 50 to 90 percent will require
reassuring MIPS eligible clinicians of the value of QCDR participation
and reporting.
Response: We appreciate the commenters' concerns and, as mentioned
previously, we are modifying the data completeness threshold for
individual MIPS eligible clinicians and groups submitting data on
quality measures using QCDRs. For the transition year, the MIPS
eligible clinician will need to report on at least 50 percent of the
MIPS eligible clinician or group's patients that meet the measure's
denominator criteria, regardless of payer for the performance period.
We do note that for the second year of MIPS, for performance periods
occurring in 2018, we are increasing the data completeness threshold to
60 percent. We also anticipate that, in the third and future years of
MIPS, for performance periods occurring in 2019 and forward, as MIPS
eligible clinicians gain experience with the MIPS we would further
increase these thresholds over time. Lastly, we also want to refer the
commenter to section II.E.9.a. of this final rule with comment period
where we discuss the requirements to become a QCDR under the MIPS.
Comment: Another commenter stated that setting a data completeness
threshold of 80 or 90 percent is not achievable for practices,
especially given practices' struggles to meet the requirement to report
measures for 50 percent of Medicare patients under PQRS. The commenter
expressed disappointment that average reporting threshold rates from
the 2014 PQRS Experience Report were not disclosed. The 80 or 90
percent requirement also creates additional burden given the inclusion
of the all-payer data requirement. The commenter also believed that
vendors will not be able to meet these more stringent requirements,
especially for the first performance period. The commenter urged CMS to
reduce the data completeness threshold to 50 percent of applicable
Medicare Part B beneficiary encounters via claims and 50 percent for
reporting via registry, EHR, and QCDR.
Response: As noted above, for the transition year of MIPS, we will
finalize a 50 percent data completeness threshold for claims, registry,
QCDR, and EHR submission mechanisms. This threshold is consistent with
the current PQRS program. While we can appreciate the concern raised by
the commenter related to vendors' readiness, we do not anticipate that
vendors will have difficulty in meeting the original proposed data
completeness threshold or the modified data completeness threshold we
are finalizing here. Lastly, we will include the average reporting
threshold rates in future years of the PQRS Experience Report, as
technically feasible.
Comment: Another commenter urged CMS to apply consistent data
reporting requirements regardless of the method of data submission, as
the commenter disagreed with different measure submission requirements
for clinicians using a QCDR, qualified registry, or EHR. The commenter
stated this consistency would allow for fair comparisons among
clinicians.
Response: We agree with the commenter and would like to explain
that we did not propose different data completeness thresholds, nor are
we finalizing different data completeness
[[Page 77122]]
thresholds across the QCDR, qualified registry, or EHR submission
mechanisms.
Comment: Another commenter stated it is necessary to maintain a 50
percent threshold until a certain level of interoperability for data
exchange across registries, EHRs and other data sources has been
achieved. This commenter believed that claims reporting is the most
burdensome for MIPS eligible clinicians as quality data codes (QDCs)
will need to be attached for each applicable claim.
Response: As noted above we are finalizing a 50 percent data
completeness threshold for the transition year of MIPS. However, we do
not agree that we can remain at a 50 percent threshold until
interoperability is achieved. Rather we believe by providing ample
notice to MIPS eligible clinicians and third party intermediaries, we
can increase the thresholds over time. It is important to note that for
the second year of MIPS, for performance periods occurring in 2018, we
are increasing the data completeness threshold to 60 percent. We also
anticipate that, for performance periods occurring in 2019 and forward,
as MIPS eligible clinicians gain experience with the MIPS we would
further increase these thresholds over time. Lastly, we recognize that
the differing submission mechanisms have varying levels of burden on
the MIPS eligible clinicians, which is why we believe that having
multiple submission mechanisms as options is an important component as
clinicians gain experience with the MIPS.
Comment: Other commenters recommended a 50 percent threshold to
ensure quality performance category scoring does not favor large
practices. The commenters were concerned that CMS' proposed scoring
favors large practices that submit data through the CMS Web Interface.
The commenters noted that MIPS eligible clinicians using CMS Web
Interface to submit data automatically achieve all of the requirements
(plus bonus points) to potentially earn maximum points, and only need
to report on a sampling of patients rather than the high percentage of
patients needed for other data submission methods, and that this
provides an advantage for these MIPS eligible clinicians over MIPS
eligible clinicians in smaller practices.
Response: While we do not agree that the MIPS quality scoring
methodologies favor large practices that submit data using the CMS Web
Interface, we can agree that small practices may require additional
flexibilities under the MIPS. Therefore, as noted previously, we are
finalizing flexibilities for smaller practices throughout this final
rule with comment period, such as reduced improvement activities
requirements.
Comment: A few commenters indicated that the proposed thresholds
would create an environment with little room for error, does not
account for potential vendor, administrative or other problems, and
will jeopardize MIPS eligible clinicians' success. These commenters
noted that MIPS eligible clinicians may be deterred from reporting high
priority and outcome measures and from reporting via electronic means
due to the administrative burden posed by the high thresholds. The
commenters stated that a 50 percent threshold still requires MIPS
eligible clinicians to report on a majority of patients, and that this
threshold does not encourage ``gaming'': Once MIPS eligible clinician
workflows are in place, it is onerous to deviate from them simply to
pick and choose which patients to include in which measure. The
commenter stated that the higher threshold is especially burdensome for
small practices without the resources to hire a full-time or part-time
employee to collect and document such information.
Response: We did not intend to increase the burden on MIPS eligible
clinicians or deter MIPS eligible clinicians from submitting data on
high priority measures. While we can agree with the commenters that
modifying existing clinical workflows can be burdensome, we believe
that once these workflows are established, performing the quality
actions for the denominator eligible patients becomes part of the
clinical workflow and is not unduly burdensome. For the transition year
of MIPS, we will finalize a 50 percent data completeness threshold for
claims, registry, QCDR, and EHR submission mechanisms. This threshold
is consistent with the current PQRS program. Additionally, for the
second year of MIPS, for performance periods occurring in 2018, we are
finalizing a 60 percent data completeness threshold for claims,
registry, QCDR, and EHR submission mechanisms. We believe it is
important to incorporate higher thresholds in future years to ensure a
more accurate assessment of a MIPS eligible clinician's performance on
the quality measures and to avoid any selection bias. We also believe
that we are providing ample notice to MIPS eligible clinicians so they
can take the necessary steps to prepare for this higher threshold in
the second year of the MIPS. We anticipate that, for performance
periods occurring in 2019 and forward, as MIPS eligible clinicians gain
experience with the MIPS we would further increase these thresholds
over time.
Comment: Another commenter stated the reporting requirement of at
least 90 percent of all patients (not just Medicare) is not possible
and that this is equivalent to requiring MIPS eligible clinicians to
report on more than six individual quality measures and is a
substantial change from the 20 patient requirement for measures groups
under the current PQRS rule. The commenter stated that their group
performs thousands of general and vascular surgeries each year and that
devoting the time and cost to review every hospital chart and operative
note and to call every patient at least once 30 days post-operation is
simply not possible. Another commenter stated that the data
completeness criteria are onerous and that requiring MIPS eligible
clinicians to report on such a high percentage of their patients limits
the types of measures physicians will be able to report (for example,
MIPS eligible clinicians will prefer non-resource-intensive outcome
measures).
Response: We appreciate the commenters' concerns and did not intend
for the data completeness thresholds to limit the types of patients
MIPS eligible clinicians would submit data on. We are finalizing a 50
percent threshold for the transition year, and a 60 percent threshold
for the second year of the MIPS, for performance periods occurring in
2018. We do believe, however, it is important to incorporate higher
thresholds in future years to ensure a more accurate assessment of a
MIPS eligible clinician's performance on the quality measures and to
avoid any selection bias. We also believe that we are providing ample
notice to MIPS eligible clinicians so they can take the necessary steps
to prepare for this higher threshold in the second year of the MIPS. We
anticipate that, for performance periods occurring in 2019 and forward,
as MIPS eligible clinicians gain experience with the MIPS we would
further increase these thresholds over time. We will, however, monitor
these policies to ensure that these data completeness thresholds do not
become so burdensome that they deter MIPS eligible clinicians from
submitting data on their appropriate patient population.
Comment: One commenter, a small mental health clinic, cited
numerous reasons for concern including clients not tolerating
significant time to ask assessment questions, difficulty in finding
applicable measures, medical staff's limited time with clients,
difficulty in getting measures from clients seen in their homes,
clinical
[[Page 77123]]
inappropriateness of spending entire first or second appointments
gathering PQRS measures, issues with PHQ9 score improvement, and other
reporting requirements including California's Medi-Cal and Mental
Health Service Act requirements. The commenter suggested the continued
use of the 50 percent reporting requirement under PQRS.
Response: We can appreciate the concerns raised by the commenter.
We are continuing to use a 50 percent data completeness threshold
similar to what was used under PQRS. We note, however, that under MIPS
the data completeness threshold applies to both Medicare and non-
Medicare patients.
Comment: One commenter also requested that CMS release data
demonstrating that raising the reporting rate is feasible for all MIPS
eligible clinicians. This commenter noted that the 2017 and 2018 PQRS
and VBPM policies required 50 percent completeness, which was a
decrease from previous years in acknowledgment of feedback from
clinicians. The commenter stated that issuing a drastic increase as
clinicians shift to a new system will be problematic, and the commenter
suggested remaining at 50 percent for the first few years and
considering phasing in increases if 50 percent is found to be feasible.
Response: We thank the commenters for their detailed feedback.
Based on the overwhelming feedback received, we do not intend to
finalize the data completeness thresholds as proposed. We did not
intend the increased burden on clinicians that the commenters described
in detail. We want to ensure that an appropriate yet achievable level
of data completeness is applied to all MIPS eligible clinicians. Based
on stakeholder feedback, for the transition year of MIPS, we will
finalize a 50 percent data completeness threshold for claims, registry,
QCDR, and EHR submission mechanisms. This threshold is consistent with
the current PQRS program. However, we continue to target a 90 percent
reporting requirement; as MIPS eligible clinicians gain experience with
the MIPS, we would further increase these thresholds over time.
Comment: Another commenter agreed with the proposal to report at
least 90 percent of patients, regardless of payer, to CMS in order to
provide the most complete picture of the MIPS eligible clinician's
quality, especially for specialists.
Response: We thank the commenter for their support. However, based
on stakeholder feedback, for the transition year of MIPS, we will
finalize a 50 percent data completeness threshold for claims, registry,
QCDR, and EHR submission mechanisms.
Comment: A few commenters believed that a 100 percent review is not
feasible because their practice performs 10,000 procedures annually.
The commenters believed that review of 25-30 procedures is more
practical.
Response: Based on the overwhelming feedback received, we do not
intend to finalize the data completeness thresholds as proposed. We did
not intend the increased burden on MIPS eligible clinicians that the
commenters described in detail. We want to ensure that an appropriate
yet achievable level of
data completeness is applied to all MIPS eligible clinicians. After
consideration of stakeholder feedback, for the transition year of MIPS,
we are modifying our proposal and will finalize a 50 percent data
completeness threshold for claims, registry, QCDR, and EHR submission
mechanisms.
Comment: Other commenters requested that CMS consider using other
reporting options that do not involve collecting data from a certain
percentage of patients, such as requiring clinicians to report on a
certain number of consecutive patients. The commenters believed the
consecutive case approach could minimize the reporting burden while
allowing for the collection of information to assess performance.
Response: In the early years of PQRS we required EPs to report on a
certain number of consecutive patients if the clinician was reporting a
measures group. Our experience was that many EPs failed to meet the
reporting requirements as they missed one or more patients in the
consecutive sequence.
Comment: A few commenters supported the proposal to give scores of
zero if MIPS eligible clinicians can, but fail to, report on the
minimum number of measures.
Response: We thank the commenters for their support of our proposal.
Comment: Another commenter supported CMS's proposal in the quality
performance category to recognize a measure as being submitted and not
assign a clinician zero points for a non-reported measure when a measure's
reliability or validity may be compromised due to unforeseen
circumstances, such as data collection problems. The commenter
recommended that CMS notify affected MIPS eligible clinicians and
groups by mail if in the future a data collection or vendor submission
issue arises.
Response: We intend to make every effort to notify affected MIPS
eligible clinicians if data collection issues arise.
Comment: Many commenters disagreed with the proposal to include
all-payer data. Several commenters believed that requiring MIPS
eligible clinicians to report all-payer data goes beyond the scope of
CMS's programmatic authority and need, violates clinicians' ethical
duties to patient confidentiality, and violates patients' privacy
rights. Other commenters stated the federal government should not be
able to access the medical information of patients who are not CMS
beneficiaries. Another commenter believed that MIPS eligible clinicians
may be discouraged from reporting through registries, QCDRs, and EHRs
due to the requirement that they report on all of their patients
regardless of payer. One commenter urged CMS to remove the requirement
to report all patients when reporting via registry.
Another commenter noted that MIPS eligible clinicians reporting
outcomes should document all factors affecting outcomes, especially
adversely affecting outcomes. The commenter stated that socioeconomic
status, family support systems, cognitive dysfunction and mental health
issues affect compliance and outcomes. Therefore, coding for some of
these factors can be misleading, even if there are available options
for diagnostic coding. The commenter noted that open access to all
physician notes would jeopardize proper documentation of these issues.
The commenter added that diagnostic coding must not inhibit
documentation of issues and concerns for physicians, and that there
must be proper acuity adjustment in measuring physician or team
performance. The commenter suggested that all charts have certain areas
of restricted protected access to allow documentation of such issues,
and that this type of charting must be available to physicians who are
not categorized as mental health professionals.
Response: We have received numerous previous comments noting that
it can be difficult for clinicians to separate Medicare beneficiaries
from other patients, and our intention with seeking all-payer data is
to make reporting easier for MIPS eligible clinicians. We note that
section 1848(q)(5)(H) of the Act authorizes the Secretary to include,
for purposes of the quality performance category, data submitted by
MIPS eligible clinicians with respect to items and services furnished
to individuals who are not Medicare beneficiaries. Furthermore, we
believe that all-payer data makes it
[[Page 77124]]
easier for MIPS eligible clinicians to obtain a complete view of their
quality performance without focusing on one subset or another of their
patient populations. We do not believe that collection of this data
constitutes a violation of patient privacy. We do not believe that the
collection of all-payer data will decrease MIPS eligible clinicians'
utilization of registries, QCDRs, and EHRs. It is important to note
that MIPS eligible clinicians may elect to report information at the
aggregate level which does not have any patient-identifiable
information. We agree that documentation related to outcomes is
challenging and we continue to work to identify the impact of socio-
demographic status on patient outcomes.
Comment: Other commenters supported the proposal to use all-payer
data for quality measures and also for patient experience surveys,
recognizing that these data will create a more comprehensive picture of
a MIPS eligible clinician's performance. Another commenter was
supportive of the proposal to require MIPS eligible clinicians
reporting quality data via qualified registries or EHR to report on
both Medicare and non-Medicare patients. The commenter favored the
proposal because it would be administratively easier and because
quality of care affects all patients, not just those covered by
Medicare.
Response: We thank the commenters for the support.
Comment: A few commenters recommended that CMS phase in the
requirement to include all-payer data for the QCDR, qualified registry,
and EHR submission mechanisms and suggested that, for year 1 of the
program, requiring only Medicare data would be a more appropriate first
step.
Response: Third party intermediaries were required to utilize all-
payer data in PQRS. Therefore, we do not believe this requirement
should be a burden, as they have already been meeting it.
Comment: Other commenters asked whether reporting all-payer data is
optional in year 1 of the program, whether there is a minimum percentage
of Medicare Part B patients required, where the benchmarks will come
from, and how it will be ensured that the benchmarks are comparable
across the industry. Some commenters recommended that reporting on
other payers be optional and that MIPS eligible clinicians not be
penalized for activities related to payers other than Medicare. The
commenters stated that the law does not require reporting data on other
payers' patients. The commenters believed that reporting on all payers
may skew data in favor of MIPS eligible clinicians with large private
payer populations over physicians with large Medicare patient
populations. A few commenters expressed concern that some practices
will be required to submit data that represents all payers because
Medicare populations are very different from those covered by other
payers. This may create an inequitable assessment of quality
performance.
Response: We would like to explain that reporting all-payer data is
not optional for the transition year of MIPS. We desire all-payer data
for all reporting mechanisms, yet certain reporting mechanisms are
limited to Medicare Part B data. Specifically, the claims reporting
mechanism relies on individual MIPS eligible clinicians attaching
quality information on Medicare Part B claims; therefore, only Medicare
Part B patients can be reported by this mechanism. The CMS Web
Interface and the CAHPS for MIPS survey currently rely on sampling
protocols based on Medicare Part B billing; therefore, only Medicare
Part B beneficiaries are sampled through that methodology. With regard
to the commenters' concern that using all-payer data would create an
inequitable assessment of the MIPS eligible clinicians' performance on
quality, we respectfully disagree. Rather, we believe that utilizing
all-payer data will provide a more complete picture of the MIPS
eligible clinicians' performance.
Comment: A few commenters suggested that rather than collecting
data from all-payers for the quality performance category under MIPS,
CMS should consider the federated data model, which would allow for
different datasets to feed into a single virtual dataset that would
organize the data. The commenters stated this would allow analysis and
comparisons across datasets without structuring all of the source
databases.
Response: We thank the commenters for this feedback and will take
it into consideration in future rulemaking.
Comment: Other commenters stated that the practice of medicine will
be compromised by linking payment to collection of private patient data
and making it available to CMS through electronic medical records.
Response: We believe that MIPS eligible clinicians will continue to
uphold the highest ethical standards of their professions and that
medical practice will not be compromised by the MIPS program.
Clinicians may elect to report information at the aggregate level,
which does not include any patient-identifiable information.
Comment: Other commenters were very concerned that increasing the
reporting threshold for quality data from 50 percent or more of
Medicare patients to 90 percent or more of all patients regardless of
payer is a major change that should be approached more gradually to
give clinicians a chance to adapt. The commenters suggested a more
gradual change, at least in the first few years, such as keeping the
patient base and threshold as is (50 percent or more of the Medicare
population) or even a smaller increase in threshold (maybe 60 or 75
percent of patients) but only for Medicare beneficiaries rather than
all payers. Another commenter requested reporting go from 50 to 75
percent and be applied to Medicare patients only (as opposed to private
insurance patients).
Response: We are modifying our proposal and finalizing a 50 percent
threshold for individual MIPS eligible clinicians or groups submitting
data on quality measures using QCDRs, qualified registries, via EHR, or
Medicare Part B claims. In addition, we are finalizing our approach of
including all-payer data for the QCDR, qualified registry, and EHR
submission mechanisms because we believe this approach provides a more
complete picture of each MIPS eligible clinician's scope of practice
and provides more access to data about specialties and subspecialties
not currently captured in PQRS.
Comment: Some commenters questioned CMS's ability to validate data
completeness criteria for all-payer data under the quality performance
category. They stated that because of this, all-payer completeness
criteria function more like a request than a requirement. The
commenters also requested information on what the auditing,
notification, and appeal (targeted review) process will be specific to
all-payer data completeness.
Response: We recognize that our data completeness criteria are
different since we are now requiring all-payer data. However, we do not
currently have the optimal capability to validate data completeness for
all-payer data. Please note that validation of all-payer data will
therefore continue to be reviewed based on the data submission
mechanism used.
For example, if the quality measure data is submitted directly from an
EHR for an electronic Clinician Quality Measure (eCQM), we expect
completeness from EHR reports will cover all of the patients that meet
the inclusion criteria for the measure, to include all-payer data found
within the
[[Page 77125]]
EHR data set for the population attributed to that measure. If the
quality data is submitted via the CMS Web Interface, we will provide
the sample of patients that must be reported on to CMS, though more may
be included given the all-payer allowance under MIPS. For the
transition year of MIPS we expect that MIPS eligible clinicians, and
especially third party intermediaries, will comply fully with the
requirements we are adopting.
Comment: Another commenter was supportive of the proposal to
require MIPS eligible clinicians reporting quality data via qualified
registries or EHR to report on both Medicare and non-Medicare patients.
The commenter favored the proposal because it would be administratively
easier and because quality of care affects all patients, not just those
covered by Medicare.
Response: We thank the commenter for their support.
Comment: Some commenters agreed that CMS should include all-payer
data in order to push quality improvement throughout the entire health
care system. The commenters were concerned, however, that including
all-payer data, combined with the amount of flexibility some clinicians
have in choosing which quality measures to report, may end up obscuring
the quality of care actually received by Medicare beneficiaries. The
commenters recommended CMS implement additional requirements or
safeguards for the inclusion of all-payer data. The commenters also
supported CMS raising the data completeness thresholds above what was
required under PQRS and increasing these thresholds even higher in
future years of MIPS. Some commenters recommended that CMS continue to
encourage the creation of databases across the payer community but
treat this as a long-term goal rather than yet another operational item
with uncertain implications. Although commenters supported all-payer
databases conceptually, they believed that operationally the United
States is far from this reality.
Response: We agree that there is potential for further quality
improvement by utilizing all-payer data. We also believe the MIPS
program's flexibility in measure selection is an asset. We will monitor
the MIPS program's impacts on care quality carefully, particularly for
Medicare beneficiaries.
Comment: Some commenters suggested changing the 90 percent of
patients' measures group reporting requirement to 25 patients per
surgeon, suggesting this would achieve statistical validity and is an
achievable level of data collection. The surgery measures groups as
defined in the proposal would then provide the commenter's practice
with highly valuable information that could benefit all patients as the
MIPS eligible clinicians review ways to operate more safely,
efficiently, and at a lower cost. Another commenter recommended that CMS
update patient sampling requirements over time.
Response: We are modifying our proposal and finalizing a 50 percent
threshold for the transition year of MIPS for individual MIPS eligible
clinicians or groups submitting data on quality measures using QCDRs,
qualified registries, via EHR, or Medicare Part B claims. In addition,
we are finalizing our approach of including all-payer data for the
QCDR, qualified registry, and EHR submission mechanisms because we
believe this approach provides a more complete picture of each MIPS
eligible clinician's scope of practice and provides more access to data
about specialties and subspecialties not currently captured in PQRS. We
have removed the measures groups referenced in the comment and replaced
them with specialty-specific measure sets.
Comment: A few commenters sought clarification on scoring when a
MIPS eligible clinician fails to submit data for the required 80 or 90
percent data completeness threshold; that is, where a MIPS eligible
clinician reports on less than 80 or 90 percent of patients but has
a greater than zero performance rate.
Response: We appreciate the commenter seeking clarification. As
discussed, we are reducing the threshold for the data completeness
requirement as outlined below for the transition year of MIPS. In
addition, we proposed that measures that fell below the data
completeness threshold would be assessed a zero; however, in alignment
with the goal to provide as many flexibilities to MIPS eligible
clinicians as possible, for the transition year, MIPS eligible
clinicians whose measures fall below the data completeness threshold
would receive 3 points for submitting the measure. We will revisit data
completeness scoring policies through future rulemaking. It is
important to note that we are also finalizing to ramp up the data
completeness threshold to 60 percent for MIPS, for performance periods
occurring in 2018, for data submitted on quality measures using QCDRs,
qualified registries, via EHR, or Medicare Part B claims. In addition,
these thresholds for data submitted on quality measures using QCDRs,
qualified registries, via EHR, or Medicare Part B claims will increase
for MIPS for performance periods occurring in 2019 and forward.
As a result of the comments regarding our proposal on data
completeness criteria we are not finalizing our policy as proposed.
Rather we are finalizing at Sec. 414.1340 the data completeness
criteria below for MIPS during the 2017 performance period.
Individual MIPS eligible clinicians or groups submitting
data on quality measures using QCDRs, qualified registries, or via EHR
must report on at least 50 percent of the MIPS eligible clinician or
group's patients that meet the measure's denominator criteria,
regardless of payer for the performance period. In other words, for
these submission mechanisms, we expect to receive quality data for both
Medicare and non-Medicare patients. For the transition year, MIPS
eligible clinicians whose measures fall below the data completeness
threshold of 50 percent would receive 3 points for submitting the
measure.
Individual MIPS eligible clinicians submitting data on
quality measures data using Medicare Part B claims, would report on at
least 50 percent of the Medicare Part B patients seen during the
performance period to which the measure applies. For the transition
year, MIPS eligible clinicians whose measures fall below the data
completeness threshold of 50 percent would receive 3 points for
submitting the measure.
Groups submitting quality measures data using the CMS Web
Interface or a CMS-approved survey vendor to report the CAHPS for MIPS
survey must meet the data submission requirements on the sample of the
Medicare Part B patients CMS provides.
We are also finalizing to ramp up the data completeness threshold
to 60 percent for MIPS for performance periods occurring in 2018 for
data submitted on quality measures using QCDRs, qualified registries,
via EHR, or Medicare Part B claims. We note that these thresholds for
data submitted on quality measures using QCDRs, qualified registries,
via EHR, or Medicare Part B claims will increase for performance
periods occurring in 2019 and onward. As noted in our proposal, we
believe higher thresholds are appropriate to ensure a more accurate
assessment of a MIPS eligible clinician's performance on the quality
measures and to avoid any selection bias. In addition, we would like to
align all the reporting mechanisms as closely as possible with
achievable data completeness criteria.
We are finalizing our approach of including all-payer data for the
QCDR,
[[Page 77126]]
qualified registry, and EHR submission mechanisms because we believe
this approach provides a more complete picture of each MIPS eligible
clinician's scope of practice and provides more access to data about
specialties and subspecialties not currently captured in PQRS. In
addition, those clinicians who utilize the QCDR, qualified registry, or
EHR submission mechanism must report a minimum of one quality measure
for at least one Medicare patient.
We are not finalizing our proposal that MIPS eligible clinicians
and groups who do not meet the proposed submission criteria noted below
would fail the quality component of MIPS. Instead, those MIPS eligible
clinicians who fall below the data completeness thresholds would have
their specific measures that fall below the data completeness threshold
not scored for the transition year of MIPS. The MIPS eligible
clinicians would receive 3 points for measures that fall below the data
completeness threshold.
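The transition-year scoring policy finalized above can be sketched as
follows. This is a minimal illustrative sketch only, not regulatory
text; the function name and the performance-points input are
assumptions for illustration.

```python
# Illustrative sketch of the 2017 transition-year data completeness
# policy: a 50 percent reporting threshold for QCDR, qualified registry,
# EHR, and Medicare Part B claims submissions, with a 3-point floor for
# measures that fall below it. Names are hypothetical, not CMS-defined.

DATA_COMPLETENESS_THRESHOLD = 0.50  # 50 percent for the 2017 performance period
FLOOR_POINTS = 3.0                  # points awarded when a measure falls short


def score_measure(reported_patients: int, denominator_patients: int,
                  performance_points: float) -> float:
    """Return the quality points for one measure under the 2017 policy.

    reported_patients / denominator_patients is the share of patients
    meeting the measure's denominator criteria that were reported.
    performance_points is the score the measure would otherwise earn.
    """
    if denominator_patients == 0:
        return 0.0
    completeness = reported_patients / denominator_patients
    if completeness < DATA_COMPLETENESS_THRESHOLD:
        # Below the threshold the measure is not scored on performance;
        # the clinician receives 3 points for submitting the measure.
        return FLOOR_POINTS
    return performance_points
```

For example, reporting 40 of 100 denominator-eligible patients falls
below the 50 percent threshold and yields the 3-point floor regardless
of the measure's performance rate; reporting 60 of 100 allows the
measure's performance score to apply.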
(c) Summary of Data Submission Criteria
Table 5 of the rule reflects our final Quality Data Submission
Criteria for MIPS:
Table 5--Summary of Final Quality Data Submission Criteria for MIPS Payment Year 2019 via Part B Claims, QCDR,
Qualified Registry, EHR, CMS Web Interface, and CAHPS for MIPS Survey
----------------------------------------------------------------------------------------------------------------
Submission
Performance period Measure type mechanism Submission criteria Data completeness
----------------------------------------------------------------------------------------------------------------
A minimum of one continuous 90- Individual MIPS Part B Claims.... Report at least six 50 percent of
day period during CY2017. eligible measures including MIPS eligible
clinicians. one outcome measure, clinician's
or if an outcome Medicare Part B
measure is not patients for the
available report performance
another high priority period.
measure; if less than
six measures apply
then report on each
measure that is
applicable. MIPS
eligible clinicians
and groups will have
to select their
measures from either
the list of all MIPS
Measures in Table A
or a set of specialty-
specific measures in
Table E.
A minimum of one continuous 90- Individual MIPS QCDR Qualified Report at least six 50 percent of
day period during CY2017. eligible Registry EHR. measures including MIPS eligible
clinicians or one outcome measure, clinician's or
Groups. or if an outcome groups patients
measure is not across all
available report payers for the
another high priority performance
measure; if less than period.
six measures apply
then report on each
measure that is
applicable. MIPS
eligible clinicians
and groups will have
to select their
measures from either
the list of all MIPS
Measures in Table A
or a set of specialty-
specific measures in
Table E.
Jan 1-Dec 31................... Groups........... CMS Web Interface Report on all measures Sampling
included in the CMS requirements for
Web Interface; AND their Medicare
populate data fields Part B patients.
for the first 248
consecutively ranked
and assigned Medicare
beneficiaries in the
order in which they
appear in the group's
sample for each
module/measure. If
the pool of eligible
assigned
beneficiaries is less
than 248, then the
group would report on
100 percent of
assigned
beneficiaries.
Jan 1-Dec 31................... Groups........... CAHPS for MIPS CMS-approved survey Sampling
Survey. vendor would have to requirements for
be paired with their Medicare
another reporting Part B patients.
mechanism to ensure
the minimum number of
measures are
reported. CAHPS for
MIPS Survey would
fulfill the
requirement for one
patient experience
measure towards the
MIPS quality data
submission criteria.
CAHPS for MIPS Survey
will only count for
one measure.
----------------------------------------------------------------------------------------------------------------
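The CMS Web Interface sampling rule summarized in Table 5 can be
sketched as follows. This is an illustrative sketch only; the function
name is hypothetical, and the input is assumed to be the consecutively
ranked, assigned Medicare beneficiaries in the order provided in the
group's sample for a module/measure.

```python
# Illustrative sketch of the CMS Web Interface rule from Table 5:
# report on the first 248 consecutively ranked and assigned Medicare
# beneficiaries; if the pool of eligible assigned beneficiaries is
# smaller than 248, report on 100 percent of assigned beneficiaries.

SAMPLE_SIZE = 248


def web_interface_sample(ranked_beneficiaries: list) -> list:
    """Return the beneficiaries a group must report on for one module/measure."""
    if len(ranked_beneficiaries) < SAMPLE_SIZE:
        # Pool smaller than 248: report on all assigned beneficiaries.
        return list(ranked_beneficiaries)
    # Otherwise, the first 248 in the order they appear in the sample.
    return ranked_beneficiaries[:SAMPLE_SIZE]
```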
(4) Application of Quality Measures to Non-Patient Facing MIPS Eligible
Clinicians
Section 1848(q)(2)(C)(iv) of the Act provides that the Secretary
must give consideration to the circumstances of non-patient facing MIPS
eligible clinicians and may, to the extent feasible and appropriate,
take those circumstances into account and apply alternative measures or
activities that fulfill the goals of the applicable performance
category to such clinicians. In doing so, the Secretary must consult
with non-patient facing MIPS eligible clinicians.
In addition, section 1848(q)(5)(F) of the Act allows the Secretary
to re-weight MIPS performance categories if there are not sufficient
measures and activities applicable and available to each type of MIPS
eligible clinician. We assume many non-patient facing MIPS eligible
clinicians will not have sufficient measures and activities applicable
and available to report and will not be scored on the quality
performance category under MIPS. We refer readers to the proposed rule
(81 FR 28247) for the discussion of how we address performance
category weighting for MIPS eligible clinicians for whom no measures
exist in a given performance category.
In the MIPS and APMs RFI, we solicited feedback on how we should
apply the four MIPS performance categories to non-patient facing MIPS
eligible clinicians and what types of measures and/or improvement
activities (new or from other payments systems) would be appropriate
for these MIPS
[[Page 77127]]
eligible clinicians. We also engaged with seven separate organizations
representing non-patient facing MIPS eligible clinicians in the areas
of anesthesiology, radiology/imaging, pathology, and nuclear medicine,
specifically cardiology. Organizations we spoke with representing
several specialty areas indicated that Appropriate Use Criteria (AUC)
can be incorporated into the improvement activities performance
category by including activities related to appropriate assessments and
reducing unnecessary tests and procedures. AUC are distinct from
clinical guidelines and specify when it is appropriate to use a
diagnostic test or procedure--thus reducing unnecessary tests and
procedures. Use of AUC is an important improvement activity, as it
fosters appropriate utilization and is increasingly used to improve
quality in cardiovascular medicine, radiology, imaging, and pathology.
These groups also highlighted that many non-patient facing MIPS
eligible clinicians have multiple patient safety and practice
assessment measures and activities that could be included, such as
activities that are tied to their participation in the Maintenance of
Certification (MOC) Part IV for improving the clinician's practice. One
organization expressed concern that because their quality measures are
specialized, some members could be negatively affected when comparing
quality scores because they did not have the option to be compared on a
broader, more common set of measures. The MIPS and APMs RFI commenters
noted that the emphasis should be on measures and activities that are
practical, attainable, and meaningful to individual circumstances and
that measurement should be outcomes-based to the extent possible.
The MIPS and APMs RFI commenters emphasized that improvement activities
should be selected from a very broad array of choices and that ideally
non-patient facing MIPS eligible clinicians should help develop those
activities so that they provide value and are easy to document. For
more details regarding the improvement activities performance category
refer to the proposed rule (81 FR 28209). The comments from these
organizations were considered in developing these proposals.
We understand that non-patient facing MIPS eligible clinicians may
have a limited number of measures on which to report. Therefore, we
proposed at Sec. 414.1335 that non-patient facing MIPS eligible
clinicians would be required to meet the otherwise applicable
submission criteria, but would not be required to report a cross-
cutting measure.
Thus we would employ the following strategy for the quality
performance criteria to accommodate non-patient facing MIPS eligible
clinicians:
Allow non-patient facing MIPS eligible clinicians to
report on specialty-specific measure set (which may have fewer than the
required six measures).
Allow non-patient facing MIPS eligible clinicians to
report through a QCDR that can report non-MIPS measures.
Non-patient facing MIPS eligible clinicians would be
exempt from reporting a cross-cutting measure as proposed at Sec.
414.1340.
We requested comments on these proposals.
The following is a summary of the comments we received regarding our
proposals on the application of quality measures to non-patient facing
MIPS eligible clinicians:
Comment: Several commenters supported the proposed exemption from
reporting a cross-cutting quality measure for non-patient facing MIPS
eligible clinicians, as these measures may not be reliable,
developmentally feasible, or clinically relevant, as well as the
allowance for non-patient facing MIPS eligible clinicians to report on
specialty-specific measure sets.
Response: We agree; however, as we have noted earlier in this rule,
we do not intend to finalize the cross-cutting measure requirements for
all MIPS eligible clinicians, including those that are determined to be
non-patient facing MIPS eligible clinicians.
Comment: Another commenter wanted more details on CMS's
considerations for non-patient facing MIPS eligible clinicians under
the quality performance category.
Response: We thank the commenter for their question. As we are not
finalizing our proposal for cross-cutting measures, we do not need to
finalize our proposal for a separate designation for non-patient facing
MIPS eligible clinicians at this time. We refer readers to section
II.E.1.b. of this final rule with comment period for more information
on non-patient facing MIPS eligible clinicians.
Comment: Other commenters proposed that CMS remove the quality
measure requirement related to patient outcomes for non-patient facing
MIPS eligible clinicians.
Response: We proposed to provide an exception for non-patient
facing MIPS eligible clinicians from the requirement to report cross-
cutting measures, but we believe that outcome measures are of critical
importance to quality measurement. Therefore, we do not believe an
additional exception is appropriate.
After consideration of the comments received regarding our
proposals on application of the quality category to non-patient facing
MIPS eligible clinicians we are not finalizing as proposed. As
previously noted in this rule, we are not finalizing the criteria
proposed at Sec. 414.1335 that MIPS eligible clinicians that are
considered patient facing must report a cross-cutting measure. The only
distinction within the quality performance category for non-patient
facing MIPS
eligible clinicians as proposed at Sec. 414.1335 is that they were not
required to report a cross-cutting measure. We are therefore finalizing
at Sec. 414.1335 that non-patient facing MIPS eligible clinicians
would be required to meet the otherwise applicable submission criteria
that apply for all MIPS eligible clinicians for the quality performance
category.
(5) Application of Additional System Measures
Section 1848(q)(2)(C)(ii) of the Act provides that the Secretary
may use measures used for payment systems other than for physicians,
such as measures used for inpatient hospitals, for purposes of the
quality and cost performance categories. The Secretary may not,
however, use measures for hospital outpatient departments, except in
the case of items and services furnished by emergency physicians,
radiologists, and anesthesiologists.
In the MIPS and APMs RFI, we sought comment on how we could best
use this authority. Some facility-based commenters requested a
submission option that allows the MIPS eligible clinician to be scored
based on the facility's measures. These commenters noted that the care
they provide directly relates to and affects the facility's overall
performance on quality measures and that using this score may be a more
accurate reflection of the quality of care they provide than the
quality measures in the PQRS or the VM program.
We will consider an option for facility-based MIPS eligible
clinicians to elect to use their institution's performance rates as a
proxy for the MIPS eligible clinician's quality score. We are not
proposing an option for the transition year of MIPS because there are
several operational considerations that must be addressed before this
option can be implemented. We requested comment on the following
issues: (1) whether we should attribute a facility's performance to a
MIPS eligible clinician for purposes of the
[[Page 77128]]
quality and cost performance categories and under what conditions such
attribution would be appropriate and representative of the MIPS
eligible clinician's performance; (2) possible criteria for attributing
a facility's performance to a MIPS eligible clinician for purposes of
the quality and cost performance categories; (3) the specific
measures and settings for which we can use the facility's quality and
cost data as a proxy for the MIPS eligible clinician's quality and cost
performance categories; and (4) if attribution should be automatic or
if a MIPS eligible clinician or group should elect for it to be done
and choose the facilities through a registration process. We may also
consider other options that would allow us to gain experience. We
solicited comments on these approaches.
The following is a summary of the comments we received regarding our
approaches to application of additional system measures:
Comment: The majority of commenters that discussed the potential
use of facility performance supported our proposal to attribute a
facility's performance to a MIPS eligible clinician for purposes of the
quality and cost performance categories. Several commenters urged CMS
to implement a CMS hospital quality program measure reporting option
for hospital-based clinicians in the MIPS as soon as possible. Other
commenters believed that using hospital measure performance in the MIPS
would help clinicians and hospitals better align quality improvement
goals and processes across the care continuum and reduce data
collection burden. One commenter thought that attributing facility
performance for the purposes of the quality and cost performance
categories could encourage harmony between the performance agendas of
clinicians and their facilities. Another commenter supported a
streamlined measurement approach for MIPS reporting for hospital based
clinicians and alignment of MIPS measures with hospital measures.
One commenter believed that hospital quality reporting should
substitute for MIPS quality reporting for hospital-based clinicians,
while another commenter specified that hospital measures should only be
used for the quality performance category, not for the cost performance
category. Another commenter strongly recommended CMS either allow
hospital based clinicians to use hospital quality measures for MIPS
reporting, or exempt hospital based clinicians from the quality
performance category until there is substantial alignment of clinician
and hospital measures. This commenter requested that such exemption be
the same as the hospital based clinician exemption under the advancing
care information performance category.
Response: We agree that using hospital measure performance may
promote more harmonized quality improvement efforts between hospital-
based clinicians and hospitals and promote care coordination across the
care continuum. We are considering appropriate attribution policies for
facility-based measures and will take commenter's suggestions into
account in future rulemaking.
Comment: Several commenters opposed using a facility's quality and
cost performance as a proxy for MIPS eligible clinicians. A few
commenters did not support inclusion of other system measures at this
time and stated that this could potentially create an additional burden
for vendors to provide additional reporting measures which they had not
previously developed or mapped out workflows for. One commenter did not
support attributing a facility's performance to a MIPS eligible
clinician for the quality and cost performance categories, noting that
facility-level performance would not be appropriate or representative
of the MIPS eligible clinician's individual performance. One commenter
expressed concern that this approach would potentially benefit MIPS
eligible clinicians with lower individual performance and would be a
detriment for those with higher performance, for whom being assessed
based on facility performance could potentially lead to lower ratings.
Another commenter expressed concern that MIPS eligible clinicians
substituting their institution's performance for their own might give
an unfair advantage to MIPS eligible clinicians from larger systems.
This commenter also requested that CMS pilot system measures prior to
any implementation of facility performance attribution under MIPS.
Another commenter opposed the use of facility-level measures for
accountability at the individual level, believing that facility
performance is not within the control of individual
clinicians. Another commenter requested that facility-based MIPS
eligible clinicians leverage continued expansion of specialty-specific
measure sets through QCDRs and qualified registries instead of using
facility-based scores. Another commenter noted that adding an
additional group reporting option for facility-based MIPS eligible
clinicians on top of the existing group reporting option is confusing.
The commenter therefore recommended CMS remove this reporting option
from the proposal. One commenter encouraged revisiting this proposal in
future years.
Response: The commenter is correct that many quality measures are
not designed for team-based care in the inpatient setting, and we
intend to examine how best to measure care provided by hospitalists and
other team-based MIPS eligible clinicians in the future. We believe
that facility-based quality measures have the potential to harmonize
quality improvement efforts between hospital-based clinicians and
hospitals, and promote care coordination across the care continuum. We
agree that it is important to develop a thoughtful attribution policy
that captures the eligible clinician's contribution and intend to
develop appropriate attribution policies for facility-based measures.
Comment: One commenter requested clarification on how CMS would
expect reporting of facility-based measures to work under MIPS in
instances where hospitals, their practices, and their EDs all use
separate EHRs. This commenter also requested clarification on CEHRT/
certification requirements and what vendors would be required to do
under such a scenario. Another commenter wanted to know whether MIPS
eligible clinicians would be subject to a facility's performance score
for quality and cost if facility-based measures were to be integrated
into MIPS in future years. One commenter recommended CMS make
additional information available regarding the use of facility measures
for the cost performance category and publish information about the
extent to which this option may improve participation by clinicians who
are predicted to be unable to participate in the cost performance
category of MIPS. Another commenter requested clarification on the
specific MIPS eligible clinicians that would be considered facility-
based MIPS eligible clinicians.
Response: We recognize that there are challenges associated with
health information exchange within institutions and should we adopt
policies for facility-based measures in future rulemaking, we would
provide more information via subregulatory guidance. We believe that it
is important to develop a thoughtful attribution policy that captures
the MIPS eligible clinician's contribution and intend to develop
appropriate attribution policies for facility-based measures.
Comment: One commenter requested CMS develop MIPS participation
options that apply a hospital's quality
[[Page 77129]]
and cost performance category measures to its employed clinicians, and
that CMS should seek input from hospitals, clinicians, and other
stakeholders to establish processes and design the implementation of
this option.
facility-level measures into the MIPS program, CMS should work with
measure stewards and applicable specialties to ensure that measure
specifications are appropriately aggregated to the clinician level and
are reflective of those factors within the clinician's control.
Response: We appreciate the suggestions and intend to work closely
with stakeholders as we examine how best to measure care provided by
hospitalists and other team-based MIPS eligible clinicians in the
future. We believe that it is important to develop a thoughtful
attribution policy that captures the MIPS eligible clinician's
(including those employed by hospitals) contribution and intend to
develop appropriate attribution policies for facility-based measures.
Comment: One commenter suggested CMS use active membership on a
hospital's medical staff or proof of an employment contract that is
effective for the measurement period as evidence of an existing
relationship between the clinician and a facility, which will be needed
in order to verify a clinician's eligibility to use facility-based
measures. However, several commenters believed that claims data
elements could provide sufficient proof of such a relationship. Another
commenter recommended CMS use specific claims data elements such as
inpatient and hospital outpatient department place-of-service codes as
evidence. One commenter suggested that CMS could consider adopting some
of the following criteria: the facility-based MIPS eligible clinician
or group is an employee of the facility; the facility-based MIPS
eligible clinician or group is not an employee of the facility, but has
a contract with the facility or the privileges needed to perform
services at the facility; and the MIPS eligible clinician or group is
an owner, co-owner, and/or investor of the facility and performs
medical services in the facility.
The same commenter proposed the following options for attribution:
Option 1: The facility-based MIPS eligible clinician performed a
plurality of his or her services at the facility in the performance
period. This proposed method for attribution generally aligned with the
Value-Based Payment Modifier two-step attribution methodology for
purposes of MIPS quality and cost measurement proposed in other parts
of the MACRA rule, which attributes a given patient to a clinician if
the clinician has performed a plurality of the primary care services
for a patient in the performance period. Option 2: The facility-based
MIPS eligible clinician or group would have a payment amount threshold
or patient count threshold at the facility that meets the payment
amount threshold or patient count threshold finalized for purposes of
eligibility to participate in an Advanced APM.
Another commenter mentioned that in adopting additional system
measures, CMS should ensure that attribution is appropriate and
relevant to clinicians, consider a methodology that enables
proportional attribution that is as close a proxy for a group as
possible, and ensure that clinician performance is captured across
settings.
Response: We will continue to seek opportunities to improve our
attribution process, including considering claims-based codes with
place-of-service modifiers among the array of options to best
attribute eligible clinicians.
Comment: The majority of commenters that supported the use of
additional systems measures supported them only in cases where the
facility-based clinician could elect use of the facility-based
measures. They did not support automatic attribution of facility-based
measures. Some commenters believed that the MIPS eligible clinician
should be able to elect to be attributed to the facility and also
choose the appropriate facility through a registration process. One
commenter noted that many MIPS eligible clinicians see patients at
multiple facilities, and thus should be able to choose which
facility would most accurately align with their actual practice
patterns.
One commenter recommended CMS explore the possibility of allowing
some clinicians to report their skilled nursing facility (SNF) scores
as their MIPS scores. Another commenter urged as much flexibility as
possible in the program and believed that SNF-based measurement should
always be an optional approach, particularly for those who practice in
a single facility. Another commenter recommended that quality and cost
performance measures under MIPS always be attributed to the SNF TIN, as
incentive payment adjustments would only be applicable at the facility
TIN level. Furthermore, the commenter stated that the attribution to
the SNF TIN would need to be automatic for clinicians working in
facility-based outpatient environments. One commenter recommended
self-nomination at the TIN level because this would allow a group to
attest that it is comprised primarily of hospital-based clinicians. This
commenter noted that it would ensure that only the clinicians who wish
to have this level of facility alignment are included in the program.
It would also permit clinicians to select which hospitals are
appropriate for alignment, allow for the inclusion of multiple
hospitals, and account for the fact that many hospitalist groups
practice in multiple locations. They also stated that this option would
allow clinicians to align their performance on selected measures with
their hospitals, which would support the drive towards team-based,
coordinated care.
One commenter noted the challenges faced by clinicians and groups
that provide care across multiple facilities and recommended hospital-
level risk-adjusted outcome measurement that is attributable to the
principal clinician or group responsible for the primary diagnosis.
Another commenter stated that as an alternative to substituting
facility measures under the MIPS program, facility-based clinicians
ought to be given the option of being treated as participating in an
Advanced APM.
One commenter requested further clarification on the proxy scoring
using a facility's quality reporting. This commenter requested examples
of proxy scoring, and wanted to see quality performance category
scoring in practice before making a recommendation. Another commenter
urged CMS to allow the use of PCHQR scores as a proxy for quality
performance, for clinicians at PPS-exempt cancer hospitals. A couple of
commenters urged CMS to make nearly all of the measures from CMS's
hospital quality reporting and pay-for-performance programs available
for use in hospital-based clinician reporting options. One commenter
proposed the following criteria for evaluating measures: clinicians
could use quality and cost measures for patient conditions and episode
groups (currently under development) for which CMS has assigned them a
clearly defined and clinically meaningful relationship under the
patient relationship assignment methodology (currently under
development). This commenter suggested that each evidence-based quality
measure would be counter-balanced with an appropriate cost measure and
that measures potentially could focus on patient safety, high quality
care delivery, patient-centered care, communication, care coordination,
and cost efficiency.
Several commenters suggested measures to be adopted. One commenter
suggested the following: PCP notification at admission, PCP
[[Page 77130]]
notification at discharge, percentage of beneficiaries with appointment
with a PCP within 7 days, and percentage of beneficiaries with
appointment with a PCP within 30 days. This commenter believed that
facility-based MIPS eligible clinicians play a valuable and
underutilized role in care coordination and that Medicare stakeholders
will benefit from MIPS eligible clinician inclusion versus exclusion.
This commenter further recommended that facility-based MIPS eligible
clinicians have the ability to submit via institutional metrics and
suggested PCP measures. Another commenter suggested several payment and
cost measures such as: The Medicare Spending Per Beneficiary Measure;
Pneumonia Payment per Episode of Care; the Cellulitis Clinical Episode-
based Payment Measure; the Kidney/UTI Clinical Episode-based Payment
Measure; and the Gastrointestinal Hemorrhage Clinical Episode-based
Payment Measure. Another commenter recommended the following measures:
(1) Severe Sepsis and Septic Shock: Management Bundle; (2) HCAHPS
(physician questions and 3-Item Care Transition Measure); (3) Hospital-
wide All-Cause Unplanned Readmission; (4) NHSN Measures (including
CAUTI, CLABSI, CDI, and MRSA); (5) COPD Measures (COPD 30-Day Mortality
Rate and COPD Readmission Rate); (6) Pneumonia Measures (Pneumonia 30-
Day Mortality Rate, Pneumonia 30-Day Readmission Rate, and Pneumonia
Payment per Episode of Care); (7) Heart Failure Measures (Heart Failure
30-Day Mortality Rate, Heart Failure 30-Day Readmission Rate, and Heart
Failure Excess Days); (8) Payment Measures (MSPB); and (9) Chart
Abstracted Clinical Measures (Influenza Immunization and Admit Decision
Time to ED Departure Time for Admitted Patients).
One commenter believed that clinicians who are MIPS eligible
clinicians, and work primarily in either an outpatient or inpatient
site--or both, as cancer care clinicians often do--should have the
ability to choose the measures most relevant to them. A commenter
recommended that MIPS eligible clinicians be able to align with
hospitals, surgery centers, or other types of institutions to utilize
patient experience survey metrics that are already collected as part of
other quality reporting programs, in order to enable these metrics to
be used as facility-based measures. Another commenter believed it was
important for CMS to ensure that only visits, medications, tests,
surgeries, and other components of maintenance for a disease that are
ordered by a MIPS eligible clinician are attributed to the MIPS
eligible clinician's quality and cost scores.
One commenter urged CMS to enable a transplant surgeon and other
members of the transplant team to elect to use their institution's
performance rates under the outcomes requirements set forth at 42 CFR
482.80(c) and 482.82(c) as a proxy for their quality performance
category score. This commenter believed that a transplant surgeon or
other MIPS eligible clinician or group's election to use their
institution's performance data should not be automatic but the
clinician's choice. Another commenter noted that a facility-based
performance option would be beneficial to those clinicians involved in
palliative care, and requested CMS allow for measures such as those
used under the Hospice Quality Reporting Program to be considered
facility-based measures under MIPS.
Response: We would like to explain that under section 1848(q)(5)(H)
of the Act we may include data submitted by MIPS eligible clinicians
with respect to items and services furnished to individuals who are not
individuals entitled to benefits under part A or enrolled under part B.
We will take these suggestions into consideration as we move towards
implementing these additional flexibilities in the future.
We will take these comments into consideration in future
rulemaking.
(6) Global and Population-Based Measures
Section 1848(q)(2)(C)(iii) of the Act provides that the Secretary
may use global measures, such as global outcome measures, and
population-based measures for purposes of the quality performance
category.
Under the current PQRS program and the Medicare EHR Incentive Program,
quality measures are categorized by domains, which include global and
population-based measures. We identified population and community
health measures as one of the quality domains related to the CMS
Quality Strategy and the NQS priorities for health care quality
improvement discussed in the proposed rule (81 FR 28192). Population-
based measures are also used in the Medicare Shared Savings Program and
for groups in the VM Program. For example, in 2015, clinicians were
held accountable for a component of the AHRQ population-based
Ambulatory Care Sensitive Condition measures as part of a larger set of
Prevention Quality Indicators (PQIs). Two broader composite measures of
acute and chronic conditions are calculated using the respective
individual measure rates for VM Program calculations. These PQIs assess
the quality of the health care system as a whole, and especially the
quality of ambulatory care, in preventing medical complications that
lead to hospital admissions.
In the CY 2015 PFS final rule with comment period (79 FR 67909),
the Medicare Payment Advisory Commission (MedPAC) commented that we
should move quality measurement for ACOs, Medicare Advantage (MA)
plans, and FFS Medicare in the direction of a small set of population-
based outcome measures, such as potentially preventable inpatient
hospital admissions, ED visits, and readmissions. In the June 2014
MedPAC Report
to the Congress: Medicare and the Health Care Delivery System, MedPAC
suggests considering an alternative quality measurement approach that
would use population-based outcome measures to publicly report on
quality of care across Medicare's three payment models, FFS, Medicare
Advantage, and ACOs.
In creating policy for global and population-based measures for
MIPS, we considered a more broad-based approach to the use of ``global''
and ``population-based'' measures in the MIPS quality performance
category. After considering the above, we proposed to use the acute and
chronic composite measures of AHRQ PQIs that meet a minimum sample size
in the calculation of the quality measure domain for the MIPS total
performance score; see Table B of the Appendix in this final rule with
comment period. MIPS eligible clinicians would be evaluated on their
performance on these measures in addition to the six required quality
measures discussed previously and summarized in Table A of the Appendix
in this final rule with comment period. Based on experience in the VM
Program, these measures have been determined to be reliable with a
minimum case size of 20. Average reliabilities for the acute and
chronic measures range from 0.64 to 0.79 for groups and individual MIPS
eligible clinicians. We intend to incorporate a clinical risk
adjustment into the PQI composites as soon as feasible and continue to
research ways to develop and use other population-based measures for
the MIPS program that could be applied to greater numbers of MIPS
eligible clinicians going forward. In addition to the acute and chronic
composite measures, we also proposed to include the all-cause hospital
readmissions (ACR) measure from the VM Program as we believe this
measure also encourages care coordination. In
[[Page 77131]]
the CY 2016 Medicare PFS final rule (80 FR 71296), we did a reliability
analysis that indicates this measure is not reliable for solo
clinicians or practices with fewer than 10 clinicians; therefore, we
proposed to limit this measure to groups with 10 or more clinicians and
to maintain the current VM Program requirement of 200 cases. Eligible
clinicians in groups with 10 or more clinicians with sufficient cases
would be evaluated on their performance on this measure in addition to
the six required quality measures discussed previously and summarized
in Table A of the Appendix of this final rule with comment period.
Furthermore, the proposed claims-based population measures would
rely on the same two-step attribution methodology that is currently
used in the VM Program (79 FR 67961 through 67964). The attribution
focuses on the delivery of primary care services (77 FR 69320) by both
primary care physicians and specialists. This attribution logic aligns
with the total per capita cost measure and is similar to, but not
exactly the same as, the assignment methodology used for the Shared
Savings Program. For example, the Shared Savings Program definition of
primary care services can be found at Sec. 425.20 and excludes claims
for certain Skilled Nursing Facility (SNF) services that include the
POS 31 modifier. In the proposed rule (81 FR 28199), we proposed to
exclude the POS 31 modifier from the definition of primary care
services. As
described in the proposed rule (81 FR 28199), the attribution would be
modified slightly to account for the MIPS eligible clinician
identifiers. We solicited comments on additional measures or measure
topics for future years of MIPS and attribution methodology. We
requested comments on these proposals.
The following is a summary of the comments we received regarding
our proposal on global and population-based measures:
Comment: Several commenters supported the importance of including
sociodemographic factor risk adjustments in the quality and cost
measures used to determine payments to MIPS eligible clinicians. One
commenter stated that risk adjustment is a widely accepted approach to
account for factors outside of the control of clinicians. Another
commenter supported adjusting quality measures to reflect
sociodemographic status (SDS), when appropriate, because measurement
systems that do not incorporate such factors into evaluation can shift
resources away from low-income communities through penalties. The
commenter requested CMS adopt adjustments to quality measures that are
affected by SDS, such as readmission within 30 days of discharge.
Another commenter stated that sociodemographic issues, such as the
inability to purchase medication and lack of family support, can
increase cost related to future MIPS eligible clinician visits, and
emergency room visits and readmissions. The commenter requested a level
of protection for situations beyond a clinician's control that can play
a major role in an individual's health outcome.
A few commenters supported the inclusion of risk adjustment in
measures and suggested that CMS examine ASPE's future recommendations.
One commenter recommended that CMS examine ASPE's recommendations and
consider other strategies as well, such as stratification. Other
commenters stated that the stakeholders affected by these decisions
should have an opportunity to review the risk adjustment findings once
issued by ASPE, and comment on how CMS proposes to incorporate the ASPE
findings into its quality metrics.
Several commenters urged CMS to work with the National Quality
Forum (NQF) on how best to proceed with risk adjustment of quality and
cost measures for sociodemographic status. One commenter recommended
CMS adopt the NQF recommendation to consider risk adjustment for
measures that have a conceptual relationship between sociodemographic
factors and outcomes.
Response: We appreciate the feedback on the role of socioeconomic
status in quality measurement. We continue to evaluate the potential
impact of social risk factors on measure performance. One of our core
objectives is to improve beneficiary outcomes. We want to ensure that
complex patients as well as those with social risk factors receive
excellent care. While we believe the MIPS measures are valid and
reliable, we will continue to investigate methods to ensure all
clinicians are treated as fairly as possible within the program. Under
the Improving Medicare Post-Acute Transformation (IMPACT) Act of 2014,
ASPE has been conducting studies on the issue of risk adjustment for
sociodemographic factors on quality measures and cost, as well as other
strategies for including SDS evaluation in CMS programs. We will
closely examine the ASPE studies when they are available and
incorporate findings as feasible and appropriate through future
rulemaking. We look forward to working with stakeholders in this
process. We will also monitor outcomes of beneficiaries with social
risk factors, as well as the performance of the MIPS eligible
clinicians who care for them to assess for potential unintended
consequences such as penalties for factors outside the control of
clinicians.
We additionally note that the National Quality Forum (NQF) is
currently undertaking a 2-year trial period in which new measures and
measures undergoing maintenance review will be assessed to determine if
risk adjusting for sociodemographic factors is appropriate. This trial
entails temporarily allowing inclusion of sociodemographic factors in
the risk-adjustment approach for some performance measures. At the
conclusion of the trial, NQF will issue recommendations on inclusion of
sociodemographic factors in risk adjustment. We intend to continue
engaging in the NQF process as we consider the appropriateness of
adjusting for sociodemographic factors in our MIPS measures.
Comment: Several commenters recommended that CMS develop the three
population health measure benchmarks in the quality performance
category by specialty and region to ensure more accurate, appropriate
comparisons for the measures. The commenters noted this approach would
help facilitate comparisons and improve the relevance of information
for patients. The commenters stated that MACRA does not preclude CMS
from considering specialties that practice in settings such as nursing
homes, assisted living, or home health and treating them in a different
manner, but stated it is inappropriate to assume they can be compared
to other internal medicine/family physicians that practice in
ambulatory settings. Other commenters supported the proposed three
population-based measures that will be calculated using claims.
Response: We appreciate the commenters' support. We continue to
analyze the best means of assessing and comparing facility-based
clinicians in nursing home, assisted living, or home health
environments versus more routine ambulatory care settings. We will
consider the feasibility of adopting separate benchmarks and regional
adjustments for the population health measures in the future. However,
as discussed in section II.E.5.b.(3) of this final rule with comment
period, for the transition year of MIPS, we are not finalizing our
proposal to require MIPS eligible clinicians and groups to report a
cross-cutting measure because we believe we should provide flexibility
for
[[Page 77132]]
MIPS eligible clinicians during the transition year to adjust to the
program.
Comment: Another commenter requested that CMS simplify the scoring
methodology in the quality performance category by removing the
``population health'' measures and avoiding creating different scoring
subcategories--in particular creating subcategories for MIPS eligible
clinicians in practices of 9 or fewer, which appears to create
different definitions of ``small practices'' throughout the MIPS
program. The commenter recommended that at a minimum, CMS should
provide accommodations for MIPS eligible clinicians based on the
statute's definition of a small practice--meaning 15 or fewer
professionals.
Response: We have examined the global and population-based measures
closely and have decided to not finalize these measures as part of the
quality performance category score. Specifically, we are not finalizing
the acute and chronic composite measures of AHRQ PQIs. We will,
however, calculate these measures for all MIPS eligible clinicians and
provide feedback for informational purposes as part of the MIPS
feedback.
Comment: Some commenters believed that system level and population-
based measures should be applicable to MIPS eligible clinicians, such
as pathologists, who typically furnish services that do not involve
face-to-face interaction with patients. The commenters stated that
activities such as blood utilization, infection control, and test
utilization activities, including committee participation, should be
credited to the whole group as pathology practices typically function
as one unit with different members of the group having different roles.
The commenters urged CMS to be flexible and not to focus exclusively
on measures and activities that involve face-to-face encounters, as
these would have an unfair and negative impact on the MIPS final scores
of non-patient facing MIPS eligible clinician specialties.
Response: We agree that non-patient facing MIPS eligible clinicians
need quality measures that are applicable to their practice. We
encourage commenters to suggest specific additional measures that we
should consider in the future.
Comment: Other commenters believed the population-based measures
would be difficult to implement without prospective enrollment that
informs MIPS eligible clinicians in advance of the patients attributed
to them.
Response: We will make every effort to provide as much information
as possible to MIPS eligible clinicians about the patients that will be
attributed to them. However, we do not believe prospective enrollment
to be feasible at this time.
Comment: Several commenters recommended that CMS use its discretion
to make proposed global and population-based measures optional under
the improvement activities performance category, rather than including
these VM Program measures into the MIPS quality performance category as
population-based health measures: The acute composite, chronic
composite, and ACR measure. The commenters were concerned that these
measures are primarily intended to be used and reported at the
metropolitan area or county level and have not been adequately tested,
rigorously assessed for appropriate sample sizes, or risk adjusted for
application at the clinician or group level. The commenters stated that
the method by which reliability rates are arrived at must be
transparent, and urged CMS to publicize the data supporting the
proposal statement that based on the VM Program, the acute and chronic
composites had an average reliability range of 0.64-0.79. The
commenters recommended that if CMS moves forward with the three
population health measures and does not make them optional, MIPS
eligible clinician performance on any administrative claims measure
should not be used for payment or be publicly reported unless they have
a reliability of 0.80, which is generally considered by statisticians
and researchers to be sufficiently reliable to make decisions about
individuals based on their observed scores. The commenters recommended
that in addition, the risk adjustment model should be developed,
tested, and released for comment prior to implementation of the
measures. Another commenter did not support the measures that are
reliable with a minimum case size of 20 and with an average reliability
range of 0.64 to 0.79 because the commenter stated that anything less
than 0.9 is unreliable. The commenter requested that CMS not implement
this
criterion until a risk adjustment can be implemented. Another commenter
recommended CMS reconsider its use of a minimum sample size of 20 for
calculating the cost measures, as extensive work has been done on both
quality measures and cost measures pointing to the need of a sample
size no smaller than 100 to achieve statistical stability.
Response: We have examined the global and population-based measures
closely and have decided to not finalize these measures as part of the
quality performance category score. Specifically, we are not finalizing
our proposal to use the acute and chronic composite measures of AHRQ
PQIs. We agree with commenters that additional enhancements, including
the incorporation of risk adjustment, need to be made to these
measures. We will, however, calculate
these measures for all MIPS eligible clinicians and provide feedback
for informational purposes as part of the MIPS feedback.
Comment: One commenter opposed CMS' proposal to score population
based measures during the transition year of MIPS. The commenter
requested that CMS phase in population-based measures during the first 2
years of MIPS as test measures with feedback (but not scored) so that
MIPS eligible clinicians and CMS can learn how population level
measures will impact the MIPS program.
Response: We agree with the commenter that further testing and
enhancements are required for some of these measures prior to their
inclusion in MIPS for payment purposes. Therefore, we are no longer
requiring
two of the three population health measures and are only requiring the
ACR measure for groups of more than 15 instead of our proposed approach
of groups of 10 or more, assuming the case minimum of 200 cases has
been met, as discussed in section II.E.6. of this final rule with
comment period. If the case minimum of 200 cases has not been met, we
will not score this measure. The MIPS eligible clinician will not
receive a zero for this measure and this measure will not apply to the
MIPS eligible clinician's quality performance category score. We will,
however, calculate these measures for all MIPS eligible clinicians and
provide feedback for informational purposes as part of the MIPS
feedback.
Comment: Another commenter recommended assessing the ACR measure
over a longer time period as the comparable measure used for hospitals
is found to be reliable and valid only when using a 3-year rolling
average. The commenter appreciated that this measure is limited to
groups with 10 or more MIPS eligible clinicians and requires 200 cases.
Response: We believe that the measure's limitation to groups with
16 or more MIPS eligible clinicians, as well as the requirement for at
least 200 cases, ensures that the measure is sufficiently reliable for
MIPS purposes. To explain, we will not apply the ACR measure to solo
practices or small groups (groups of 15 or fewer). We will apply the
ACR measure
to groups of more than 15 who meet the case volume.
Comment: Another commenter recommended that the population-based
[[Page 77133]]
measures only be applied to MIPS groups.
Response: We attempted to structure the MIPS program to be as
inclusive as possible for quality measurement purposes. Our intention
was to ensure that as many MIPS eligible clinicians as possible could
report on as many measures as possible.
Comment: Other commenters stated that MIPS is designed to determine
aggregate population-based outcome measures across clinicians in a
local area sharing the same hospitals and clinicians. The commenters
proposed that CMS share with MIPS participants average MIPS final
scores by clinician category and cross-reference comparative Advanced
APM performance.
Response: We do not believe MIPS is designed to determine aggregate
population-based outcome measures. However, we have discretion to
pursue this approach if we deem appropriate. We will consider these
suggestions as we develop appropriate feedback forms for MIPS eligible
clinicians. Our intention is to provide as much information as possible
to MIPS eligible clinicians to assist with quality improvement efforts.
Comment: Other commenters disagreed with the proposed use of the
30-day ACR measure because they believed that doing so would
potentially penalize clinicians who care for the most complex patients
and those of the lowest socioeconomic status (SES). They also indicated
that the measure is generally
inappropriate given the lack of MIPS eligible clinician control over
some of the factors that lead to readmission. Another commenter
believed MIPS eligible clinicians are penalized for readmissions, but
not rewarded for successfully keeping people out of the hospital
completely. Other commenters expressed concern for the use of the ACR
measure because there are a multitude of factors that contribute to
readmission making it a difficult outcome to measure. The commenters
believed that there needs to be more studies prior to using the measure
at the MIPS eligible clinician level, including the impact on MIPS
eligible clinicians who serve disadvantaged populations. In addition,
the commenters believed that the measure requires risk-adjustment for
SDS factors, community factors, and the plurality of care/care
coordination. The commenters sought clarity on how the triggering of an
index episode and attribution of ACR to any particular MIPS eligible
clinician or group larger than 10 will be relevant. Other commenters
opposed the ACR measure due to concern that it is not risk adjusted by
severity level or tertiary care facility. The commenters were also
concerned that MIPS eligible clinicians and hospitals are trimming back
on SNF transfers to decrease bundled costs, increasing readmission
rates. Some commenters recommended using the National Committee for
Quality Assurance's (NCQA's) ACR measure and not the ACR measure that is
specified for hospitals. Other commenters urged CMS to reconsider
requiring the use of the ACR measure, as they were concerned with the
reliability and validity levels associated with applying the measure to
a single clinician in a given year. They noted that the comparable
measure for hospitals requires a 3-year rolling average to mitigate
potential variability, and therefore, requested CMS explore assessing
the measure over a longer time period.
Response: We appreciate the commenters' concerns and suggestions.
However, we have examined the ACR measure closely and have decided to
finalize the ACR measure from the VM Program for groups with 16 or more
eligible clinicians, as part of the quality performance category for
the MIPS final score. Readmissions are a potential cause for patient
harm, and we believe it necessary to incentivize their reduction. We
believe measuring and holding MIPS eligible clinicians accountable for
readmissions is important for quality improvement, particularly given
the harm that patients face when readmitted. We hold hospitals and
post-acute care facilities accountable for readmissions as well;
holding all clinicians accountable for readmissions incentivizes better
coordination of care across care settings and clinicians.
We would like to explain that the all-cause hospital readmission
measure from VM uses 1 year of inpatient claims to identify eligible
admissions and readmissions, as well as up to 1 year prior of inpatient
data to collect diagnoses for risk adjustment. The measure reports a
single composite risk-standardized rate derived from the volume-
weighted results of hierarchical regression models for five specialty
cohorts. Each specialty cohort model uses a fixed, common set of risk-
adjustment variables. It is important to note a couple of features of
the risk adjustment design developed for CMS by the Yale School of
Medicine Center for Outcomes Research & Evaluation (CORE). First, the
ACR measure involves estimating separate risk adjustment models for
seven different cohorts of medical professionals (general medicine,
surgery/gynecology, cardiorespiratory, cardiovascular, neurology,
oncology, and psychiatry) because conditions typically cared for by the
same team of clinicians are likely to reflect similar levels of
readmission risk. The risk-adjusted readmission rates for each cohort
are then combined into a single adjusted rate. Second, for each cohort, the risk
adjustment models control for age, principal diagnoses, and a broad
range of comorbidities (identified from the patient's clinical history
over the year preceding the index admission, not just at the time of
the hospitalization). Please note that the measure has been included
for the last several years in the Annual Quality and Resource Use
Reports (QRURs), so clinician groups and clinicians can find out how
they perform on the measure and use the data in the reports to improve
their performance. We will not apply the readmission measure to solo
practices or small groups (groups of 15 or fewer clinicians). We will
apply the readmission measure to groups of more than 15 that meet the
minimum case volume of 200 cases. In addition, we continually reassess reliability and will
monitor MIPS eligible clinicians' performance under the MIPS for
unintended consequences.
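The volume-weighted combination described above can be expressed as a brief illustrative sketch. This sketch is not part of the measure specification: the cohort names, rates, and admission volumes are hypothetical, and the hierarchical regression and shrinkage steps are omitted.

```python
def composite_acr_rate(cohorts):
    """Combine per-cohort risk-standardized readmission rates into a
    single composite rate, weighting each cohort by its admission
    volume (illustrative sketch only)."""
    total_admissions = sum(c["admissions"] for c in cohorts)
    weighted = sum(c["rate"] * c["admissions"] for c in cohorts)
    return weighted / total_admissions

# Hypothetical cohort results for one group:
cohorts = [
    {"cohort": "general medicine", "rate": 0.16, "admissions": 500},
    {"cohort": "surgery/gynecology", "rate": 0.10, "admissions": 300},
]
# composite = (0.16*500 + 0.10*300) / 800, approximately 0.1375
```

Under this weighting, cohorts with more admissions contribute proportionally more to the single composite rate, as the text describes.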
It is important to note that for the VM Program, an index episode
for the readmission measure is triggered when a beneficiary who has
been attributed to a TIN is hospitalized with an eligible hospital
admission for the measure. Note that the index admission is not
directly attributed to a TIN as in the case of an episode for the
Medicare Spending per Beneficiary measure; rather, index admissions are
tied to the beneficiaries attributed to the TIN per the two-step
methodology. Regarding evidence for whether the measure incentivizes
reductions in readmissions, we refer readers to The New England Journal
of Medicine article available at http://www.nejm.org/doi/full/10.1056/NEJMsa1513024, which concluded that readmission trends are consistent
with hospitals' responding to incentives to reduce readmissions,
including the financial penalties for readmissions under the Affordable
Care Act. With respect to SDS factors, we refer readers to our
discussion above of the NQF's 2-year trial and ASPE's ongoing research.
We will continue to assess the measure's results and will consider the
commenters' feedback in the future.
Comment: Another commenter believed that global outcome measures
and population-based measures should not be included in the MIPS
quality score until there is further understanding of: the reliability
of a measurement volume of 20 patients; how accountability is assigned
to the MIPS
[[Page 77134]]
eligible clinicians who have control; how conditions that are not
treated by the surgeon will be included or excluded; how population-
based measures will be used at the MIPS eligible clinician level; the
reliability and validity of measures if modified; the need for risk
adjustment of the composite measures; whether adjustments for SDS
factors will be considered; and the potential unintended consequences
of including resource utilization.
Response: We advocate the continued implementation of population-
based measures and will continue to work with stakeholders to improve
and expand them over time. We note that these measures have been used
in other programs, such as the Medicare Shared Savings Program and for
groups in the VM Program, and are aligned with the National Quality
Strategy.
Comment: Some commenters urged CMS to not maintain administrative
claims-based measures, which were developed for use at the community or
hospital level, and often result in significant attribution issues. The
commenters stated these measures tend to have low statistical
reliability when applied at the individual clinician level, and at
times at the group level. They are also calculated with little
transparency, which confuses and frustrates MIPS eligible clinicians.
The commenters stated that scores on these particular measures do not
provide actionable feedback to MIPS eligible clinicians on how they can
improve.
Response: We believe administrative claims-based measures are a
necessary option to minimize reporting burden for MIPS eligible
clinicians. The ACR measure has been used in both the Shared Savings
Program and the VM Program for several years. We would like to note
that at the minimum case sizes applied for the VM, average reliability
for the ACSC composite measures exceeds 0.40, even for TINs with one EP.
We can understand why commenters see these measures as less
transparent and actionable compared to the PQRS process measures.
However, this is largely driven by risk adjustment and shrinkage (in
the case of the ACR measure), both of which are attempts to protect
clinicians from ``unfair'' outcomes, albeit at the cost of decreased
transparency. In the context of the QRURs, we have provided
supplementary tables to the QRUR containing patient level information
on admissions, including reason for admission (principal diagnosis) and
whether it was followed by an unplanned readmission, to support both
more transparency as well as actionability. We intend to work with MIPS
eligible clinicians and other stakeholders to continue improving
available measures and reporting methods for MIPS.
We continually reassess measures, and this is why we have worked
with measure owners and stakeholders to improve the risk adjustment
methodology for these measures. In addition, we have used these
measures under the VM Program and have provided feedback to groups and
individual clinicians for the last several years. Further, we apply
case minimums to ensure measures are reliable for groups and individual
clinicians. The measures are outcome focused and are calculated on
behalf of the clinician using Medicare claims and other administrative
data. In addition, they are low burden, with the goal that groups and
individual clinicians will invest in care redesign activities to
improve outcomes for patients in settings where good ambulatory care
coordination reduces avoidable admissions.
Comment: Another commenter had concerns about the proposal to
include population health and prevention measures for all MIPS eligible
clinicians, stating that some specialists and sub-specialists have no
meaningful responsibility for population or preventive services.
Response: We believe that all MIPS eligible clinicians, including
specialists and subspecialists, have a meaningful responsibility to
their communities, which is why we have focused on population health
and prevention measures for all MIPS eligible clinicians. Individuals'
health relates directly to population and community health, which is an
important consideration for quality measurement generally and MIPS
specifically. It is important to note that we are no longer requiring
two of the three population health measures and are only requiring the
ACR measure for groups of more than 15 instead of our proposed approach
of groups of 10 or more, assuming the case minimum of 200 cases has
been met, as discussed in section II.E.6. of this final rule with
comment period. If the case minimum of 200 cases has not been met, we
will not score this measure. Thus, the MIPS eligible clinician will not
receive a zero for this measure, but rather this measure will not apply
to the MIPS eligible clinician's quality performance category score. We
believe the ACR measure for groups of more than 15 is appropriate and
will provide meaningful measurement.
Comment: Another commenter opposed using the same attribution
method that was originally used for ACOs and is currently used for the
VM Program for CMS' proposal to score MIPS eligible clinicians on two
or three (depending on practice size) additional `global' or
`population based' quality measures to be gathered from administrative
claims data. The commenter believed these measures potentially hold
MIPS eligible clinicians, especially specialists such as
ophthalmologists, responsible for care they did not provide. The
measures--acute and chronic care composites and ACR--focus on the
delivery of primary care, which does not apply to ophthalmology or a
variety of other specialties. Therefore, specialists should be exempt
from these additional measures and evaluated only on the six measures
they choose to report.
Response: As noted above, the ACR and ACSC measures have been used
in both the Shared Savings Program and the VM Program for several
years. The ACR measure involves estimating separate risk adjustment
models for seven different cohorts of medical professionals (general
medicine, surgery/gynecology, cardiorespiratory, cardiovascular,
neurology, oncology, and psychiatry) because conditions typically cared
for by the same team of clinicians are likely to reflect similar levels
of readmission risk. The measure reports a single composite risk-
standardized rate derived from the volume-weighted results of
hierarchical regression models for five specialty cohorts. Each
specialty cohort model uses a fixed, common set of risk-adjustment
variables. We believe this measure is representative of most MIPS
eligible clinicians.
In addition, we have examined the global and population-based
measures closely and have decided to not finalize two of these measures
as part of the quality performance category score. Specifically, we are
not finalizing use of the acute and chronic composite measures of AHRQ
PQIs. We agree with commenters that additional enhancements need to be
made to these measures for inclusion of risk adjustment. We will,
however, calculate these measures for all MIPS eligible clinicians and
provide feedback for informational purposes as part of the MIPS
feedback.
Comment: Other commenters suggested that if the three claims-based
measures were instead reported by a QCDR or qualified registry and
included the total patient population, regardless of payer, the MIPS
eligible clinicians' patient population would be better
[[Page 77135]]
represented and overall scores would be more accurate. The commenters
also believed this would reduce the administrative burden on CMS for
calculating these metrics and attributing beneficiaries. The commenters
believed that because these measures are calculated by CMS and
represent up to a third of the quality score, QCDRs and qualified
registries would have limited ability to give MIPS eligible clinicians
insight into their performance and to provide benchmarking data back to
MIPS eligible clinicians throughout the year, which would assist
clinicians in judging how they are performing relative to other
organizations within the registry. The commenters noted that QCDRs and
qualified registries serve a critical function for MIPS eligible
clinicians, allowing them to receive more timely feedback on their
rates and on how their rates compare to others using the same QCDR or
qualified registry. When up to a third of the quality score is based on
data not calculated by the QCDR or qualified registry, it becomes
challenging for that entity to provide meaningful feedback and
benchmarking to MIPS eligible clinicians on how they are performing in
the overall quality performance category, which amounts to 50 percent
of their MIPS final score.
Response: We appreciate the suggestion, but we believe it is
important, at the outset of the MIPS program, to use CMS claims data,
which we know to be valid, and to calculate these measures in the way
with which providers are familiar. We would consider future refinements
to the measure, including exploring how a registry or QCDR might be
able to participate in the claims-based measures' calculation.
Comment: Some commenters supported the inclusion of ACR measure
rates in the proposed global and population health measurement, and the
use of telehealth to achieve goals.
Response: We thank the commenters for their support. Regarding the
commenters' reference to telehealth, we note that telehealth can help to
support better health and care at the patient and population levels. As
indicated in the Federal Health IT Strategic Plan 2015-2020 (Strategic
Plan) which can be found at http://www.hhs.gov/about/news/2015/09/21/final-federal-health-it-strategic-plan-2015-2020-released.html#,
telehealth can further the goals of: transforming health care delivery
and community health; enhancing the nation's health IT infrastructure;
and, advancing person-centered and self-managed health.
Comment: Other commenters stated that population-based measures had
low statistical reliability for practice groups smaller than hospitals.
The commenters requested that specialists and small MIPS eligible
clinicians be exempt from reporting population-based measures. Another
commenter stated attributing population-based measure outcomes to
specific MIPS eligible clinicians is inappropriate. Further, the
commenter stated MIPS eligible clinicians should only be scored on
measures they choose within the quality performance category. A few
commenters requested that population-based measures be removed from
quality reporting, because these measures were developed for use in the
hospital setting and would be unreliable when applied at the individual
MIPS eligible clinician's level. Another commenter stated that global
and population-based measures (PQIs specifically) should not be used
until they were appropriately risk adjusted for patient complexity and
socio-demographic status.
Response: We have examined the global and population-based measures
closely and have decided to not finalize the acute and chronic
composite measures of AHRQ PQI. Therefore, we are no longer requiring
two of the three population health measures and are only requiring the
ACR measure for groups of more than 15 instead of our proposed approach
of groups of 10 or more, assuming the case minimum of 200 cases has
been met, as discussed in section II.E.6. of this final rule with
comment period. If the case minimum of 200 cases has not been met, we
will not score this measure. Thus, the MIPS eligible clinician will not
receive a zero for this measure, but rather this measure will not apply
to the MIPS eligible clinician's quality performance category score. We
believe the ACR measure for groups of more than 15 is appropriate and
will provide meaningful measurement. Therefore, we respectfully
disagree with the commenter's statement that MIPS eligible clinicians
should only be scored on measures they choose within the quality
performance category.
Comment: Some commenters did not want CMS to use global and
population-based measures for accountability. The commenters remarked
that CMS has not provided enough evidence that these measures have any
impact on quality. The commenters found global and population-based
measures confusing and frustrating because MIPS eligible clinicians
have no control over appropriate measures for accountability.
Response: The purpose of the global and population-based measures
is to encourage systemic health care improvements for the population
being served by MIPS eligible clinicians. We note further that we have
found the PQI measures to be reliable in the VM Program with a case
count of at least 20. As we noted in our proposal, we intend to
incorporate clinical risk adjustment for the PQI measures as soon as
feasible.
Comment: Other commenters supported the use of global and
population-based measures, and supported CMS's inclusion of the acute
and chronic composite measures and the ACR measure. A few commenters
supported the proposal to use population-based measures from the acute
and chronic composite measures and the ACR measure or AHRQ PQIs with a
minimum case size of 20 and urged CMS to add a clinical risk adjustment
as soon as feasible.
Response: We thank the commenters for their support.
Comment: A few commenters requested that the denominator for the
quality performance category be adjusted as appropriate to reflect the
inapplicability of the global and population-based measures to certain
MIPS eligible clinicians' practices (the commenters specified that
these measures are inappropriate for hospitalists). Another commenter
requested population-based measures be removed from quality reporting,
because these measures were developed for use in the hospital setting
and would be unreliable when applied at the individual MIPS eligible
clinicians' level. Other commenters stated that global and population-
based measures (PQIs specifically) should not be used until they were
appropriately risk adjusted for patient complexity and socio-
demographic status.
Response: We believe these measures are important for all MIPS
eligible clinicians, because their purpose is to encourage systemic
health care improvements for the population being served by MIPS
eligible clinicians. We believe that hospitalists are fully capable of
supporting that objective. Additionally, we are using the same two-step
attribution methodology that we have adopted in the VM Program, and
that methodology focuses on the delivery of primary care services both
by MIPS eligible clinicians who work in primary care and by
specialists.
Comment: Some commenters expressed support for including more
global, population-based measures that are not specialty-specific or
limited to addressing specific conditions in the program, but noted
that the level of accountability for population-based measures is best
at the health system and community level--where the numbers are large
enough--rather than at the MIPS eligible clinician level.
[[Page 77136]]
Response: We thank the commenters for the feedback. We will take
the suggestions into consideration in future rulemaking.
Comment: Another commenter believed that the population-based
measures included in the proposal were appropriate for population
measurement, but could go further with respect to measuring outcomes.
One commenter outlined necessary readmission scenarios to prevent graft
rejection for transplant patients and urged CMS to remove the
population-based measures, which indirectly include hospital
readmissions, from consideration under the quality component of MIPS.
Response: We believe the ACR measure for groups of more than 15 is
appropriate and will provide meaningful measurement. Please refer to
the discussion above regarding the ACR measure. In addition, we have
examined the global and population-based measures closely and have
decided to not finalize the acute and chronic composite measures of
AHRQ PQIs.
Comment: Several commenters recommended that CMS not require the
submission of administrative claims-based population-based measures and
stated that they tend to have low reliability for MIPS eligible
clinicians at both the individual and group levels. The commenters recommended that
CMS make the measures optional in the improvement activities
performance category or exempt small practices from the measures.
Response: We believe that claims-based measures are sufficiently
reliable for value-based purchasing programs, including MIPS. We note
that the quality measures and improvement activities are not
interchangeable. We will consider other measures that could potentially
replace claims-based measures in the future. We note that the
administrative claims-based population-based measures are calculated
from Part B claims and are not separately submitted by MIPS eligible
clinicians, so they do not impose an administrative reporting burden.
Comment: Other commenters expressed concern that the proposal
included administrative claims-based population-based measures that
were previously part of the VM Program because these measures are
specified for the inpatient and outpatient hospital setting and are
less reliable when applied to individual MIPS eligible clinicians and
groups. The commenters requested CMS decrease the threshold levels for
quality reporting measures, expand exemptions, and develop payment
modifier measures that have a higher reliability at the MIPS eligible
clinician level. Another commenter had concerns about adopting measures
from other organizational settings (for example, hospitals) for MIPS,
as the underlying theory and concepts, technical definitions, and
parameters of use might differ across contexts.
Response: We would like to explain that some measures are geared
toward facilities and some are attributable to individuals. Please
refer to the Table A of the Appendix in this final rule with comment
period for the applicable measures. We have worked to adopt only MIPS
eligible clinician individual or group-based measures in the MIPS
program.
Comment: Another commenter recommended aligning measures for
hospitals and hospitalists and limiting those measures to the quality
performance category. The commenter further recommended maintaining the
voluntary application of hospital measures (specifically those that
could reflect the influence of hospitalists) to MIPS eligible
clinicians. Some commenters encouraged CMS to align quality measures
with current hospital measures because, given small staffing levels,
hospital staff require significant time and effort to maintain and
report MIPS and APM data. The commenters stated that aligning hospital and MIPS eligible
clinician measures would reduce potential for reporting error and allow
them to pursue common goals to improve quality of care delivery.
Another commenter recommended that hospital, ACO, and pay for
performance data be used to measure MIPS performance.
Response: We appreciate the commenter's feedback and will consider
it in future years of the program.
After consideration of the comments regarding our proposal on
global and population-based measures, we are not finalizing all of these
measures as part of the quality score. Specifically, we are not
finalizing our proposal to use the acute and chronic composite measures
of AHRQ PQIs. We agree with commenters that additional enhancements,
including the addition of risk adjustment, needed to be made to these
measures prior to inclusion in MIPS. We will, however, calculate these
measures for all MIPS eligible clinicians and provide feedback for
informational purposes as part of the MIPS feedback.
Lastly, we are finalizing the ACR measure from the VM Program as
part of the quality measure domain for the MIPS total performance
score. We are finalizing this measure with the following modifications
to our proposal. We will not apply the ACR measure to solo practices or
small groups (groups of 15 or fewer clinicians). We will apply the ACR
measure to groups of 16 or more that meet the case volume of 200 cases.
A group would be scored on the ACR measure even if it did not submit
any quality measures, provided it submitted data for other performance
categories; otherwise, the group would not be scored on the readmission
measure. Under our transition year policies, the readmission measure
alone would not produce a neutral to positive MIPS payment adjustment
because, in order to achieve a neutral to positive MIPS payment
adjustment, a MIPS eligible clinician or group must submit information
for one of the three performance categories, as discussed in section
II.E.7. of the final rule with comment period. In addition, the ACR
measure in the MIPS transition year CY 2017 will be based on the
performance period (January 1, 2017, through December 31, 2017).
However, for MIPS eligible clinicians who do not meet the minimum case
requirements, the ACR measure is not applicable.
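The applicability conditions finalized above can be restated as a brief logical sketch, offered purely as an illustration of the stated policy; the function name and inputs are hypothetical and are not part of the rule.

```python
def acr_measure_applies(group_size: int, eligible_cases: int) -> bool:
    """Return True if the ACR measure is scored for a group under the
    finalized policy: groups of 16 or more clinicians that meet the
    200-case minimum. When this returns False, the measure is simply
    not applied; the group does not receive a zero for it."""
    return group_size >= 16 and eligible_cases >= 200

print(acr_measure_applies(16, 200))   # True
print(acr_measure_applies(15, 1000))  # False: solo practice or small group
print(acr_measure_applies(25, 150))   # False: below the 200-case minimum
```

As the text notes, a False result does not lower the quality performance category score; the measure is excluded from it.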
c. Selection of Quality Measures for Individual MIPS Eligible
Clinicians and Groups
(1) Annual List of Quality Measures Available for MIPS Assessment
Under section 1848(q)(2)(D)(i) of the Act, the Secretary, through
notice and comment rulemaking, must establish an annual list of quality
measures from which MIPS eligible clinicians may choose for purposes of
assessment for a performance period. The annual list of quality
measures must be published in the Federal Register no later than
November 1 of the year prior to the first day of a performance period.
Updates to the annual list of quality measures must be published in the
Federal Register not later than November 1 of the year prior to the
first day of each subsequent performance period. Updates may include
the removal of quality measures, the addition of new quality measures,
and the inclusion of existing quality measures that the Secretary
determines have undergone substantive changes. For example, a quality
measure may be considered for removal if the Secretary determines that
the measure is no longer meaningful, such as measures that are topped
out. A measure may be considered topped out if measure performance is
so high and unvarying that meaningful distinctions and improvement in
performance can no longer be made. Additionally, we are not the measure
steward for most of the proposed quality measures available for
[[Page 77137]]
inclusion in the MIPS annual list of quality measures. We rely on
outside measure stewards and developers to maintain these measures.
Therefore, we also proposed to give consideration to removing measures
that measure stewards are no longer able to maintain.
Under section 1848(q)(2)(D)(ii) of the Act, the Secretary must
solicit a ``Call for Quality Measures'' each year. Specifically, the
Secretary must request that eligible clinician organizations and other
relevant stakeholders identify and submit quality measures to be
considered for selection in the annual list of quality measures, as
well as updates to the measures. Although we will accept quality
measures submissions at any time, only measures submitted before June 1
of each year will be considered for inclusion in the annual list of
quality measures for the performance period beginning 2 years after the
measure is submitted. For example, a measure submitted prior to June 1,
2016, would be considered for the 2018 performance period. Of those
quality measures submitted before June 1, we will determine which
quality measures will move forward as potential measures for use in
MIPS. Prior to finalizing new measures for inclusion in the MIPS
program, those measures that we determine will move forward must also
go through notice-and-comment rulemaking, and the new proposed measures
must be submitted to a peer-reviewed journal. Finally, for quality
measures that have undergone substantive changes, we proposed to
identify such measures, including but not limited to measures that have
had changes to the measure specification, measure title, or domain. Through NQF's
or the measure steward's measure maintenance process, NQF-endorsed
measures are sometimes updated to incorporate changes that we believe
do not substantively change the intent of the measure. Examples of such
changes may include updated diagnosis or procedure codes or changes to
exclusions to the patient population or definitions. While we address
such changes on a case-by-case basis, we generally believe these types
of maintenance changes are distinct from substantive changes to
measures that result in what are considered new or different measures.
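The June 1 submission-timing rule described above can be sketched as follows. This is an illustration only; the function is hypothetical, and CMS's actual review involves additional steps (rulemaking, peer review) not shown here.

```python
from datetime import date

def earliest_performance_year(submitted: date) -> int:
    """Map a measure submission date to the first performance period
    for which it could be considered: measures submitted before June 1
    of a year are considered for the performance period beginning
    2 years later; later submissions roll into the next year's cycle."""
    before_june_1 = (submitted.month, submitted.day) < (6, 1)
    cycle_year = submitted.year if before_june_1 else submitted.year + 1
    return cycle_year + 2

print(earliest_performance_year(date(2016, 5, 15)))  # 2018
```

This matches the example in the text: a measure submitted prior to June 1, 2016, would be considered for the 2018 performance period.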
In the transition year of MIPS, we proposed to maintain a majority
of previously implemented measures in PQRS (80 FR 70885-71386) for
inclusion in the annual list of quality measures. These measures could
be found in Table A of the Appendix of the proposed rule: Proposed
Individual Quality Measures Available for MIPS Reporting in 2017 (81 FR
28399 through 28446). Also included in the Appendix in Table B of the
proposed rule (81 FR 28447) was a list of proposed quality measures
that do not require data submission, some of which were previously
implemented in the VM (80 FR 71273-71300), that we proposed to include
in the annual list of MIPS quality measures. These measures can be
calculated from administrative claims data and do not require data
submission. We also proposed measures that were not previously
finalized for implementation in the PQRS program. These measures and
their draft specifications are listed in Table D of the Appendix in the
proposed rule (81 FR 28450 through 28460). The proposed specialty-
specific measure sets are listed in Table E of the Appendix in the
proposed rule (81 FR 28460 through 28522). As we continue to develop
measures and specialty-specific measure sets, we recognize that many
MIPS eligible clinicians see both Medicaid and Medicare patients, and
we seek to align our measures by utilizing Medicaid measures in
the MIPS quality performance category. We believe that aligning
Medicaid and Medicare measures is in the interest of all clinicians and
will help drive quality improvement for our beneficiaries. For future
years, we solicited comment about the addition of a ``Medicaid measure
set'' based on the Medicaid Adult Core Set (https://www.medicaid.gov/medicaid-chip-program-information/by-topics/quality-of-care/adult-health-care-quality-measures.html). We also sought to include measures
that were part of the seven core measure sets that were developed by
the Core Quality Measures Collaborative (CQMC). The CQMC is a
collaborative of multiple stakeholders that is convened by America's
Health Insurance Plans (AHIP) and co-led with CMS. The purpose of the
collaborative is to align measures and develop consensus on core
measure sets across public and private payers. Measures we proposed for
removal can be found in Table F of the Appendix in the proposed rule
(81 FR 28522 through 28531) and measures that will have substantive
changes for the 2017 performance period can be found in Table G of the
Appendix in the proposed rule (81 FR 28531 through 28569). In future
years, updates to the annual list of quality measures available for
MIPS assessment will occur through rulemaking. We requested comment on these
proposals. In particular, we solicited comment on whether there are any
measures that commenters believe should be classified in a different
NQS domain than what was proposed or that should be classified as a
different measure type (for example, process vs. outcome) than what was
proposed.
The following is a summary of the comments we received on our
proposals regarding the Annual List of Quality Measures Available for
MIPS Assessment.
Comment: One commenter wanted to know through what mechanism
stakeholders will be made aware of the public comment period and final
measure publications associated with quality measure changes under MIPS
(for example, the PFS rule) in advance of the proposed annual update,
and whether CMS plans to make measure updates specific to MIPS. Another
commenter requested clarity on when the measures and measure sets will
be released.
Response: The final measure sets can be found in the Appendix of
this final rule with comment period. We intend to make updates to the
list of quality measures annually through future notice and comment
rulemaking as necessary. At this time, we cannot provide more
specificity on our rulemaking schedule, but intend to announce
availability of the proposed and final measure sets through stakeholder
outreach, listservs, online postings on qualitypaymentprogram.cms.gov,
and other communication channels that we use to disseminate information
to our stakeholders.
Comment: One commenter asked that all measures be published in a
sortable electronic format, such as MS Excel or a comma-delimited
format compatible with Excel.
Response: We intend to post the measures and their specifications
on the Quality Payment Program Web site
(qualitypaymentprogram.cms.gov). We are striving to design the Web site
with user needs in mind so that users will have easy access to the
information that they need.
Comment: One commenter requested clarification on the methodology
for publishing, reviewing, benchmarking, and giving feedback on
measures.
Response: As discussed in section II.E.5.c. of this final rule with
comment period, we select measures through a pre-rulemaking process,
which includes soliciting public comments, and adopt those measures
through notice-and-comment rulemaking. We then collect measure data,
establish performance benchmarks based on a prior period or the
performance period, score MIPS eligible clinicians based on their
performance relative to the benchmarks, and provide feedback to MIPS
eligible clinicians on their performance. Also, as
[[Page 77138]]
discussed further in section II.E.10. of this final rule with comment
period, we intend to publicly post performance information on the
Physician Compare Web site.
Comment: One commenter requested that any proposed introduction of
additional inpatient or hospital measures be published in the same
place that other MIPS quality measure proposed changes are published.
Response: We agree with the commenter and will strive to ensure
that all MIPS policy changes occur together. However, other rulemaking
vehicles may be necessary for the Program's implementation in the
future.
Comment: One commenter did not support the Quality Payment Program,
believing quality measures should be developed on a state level by the
physicians in the state.
Response: The Quality Payment Program is required by statute. In
addition, we note that the vast majority of the measures that are being
finalized were developed by the physician community.
Comment: A few commenters cautiously supported the proposal that
CMS release measures by November 1 of the year in advance of the
performance period, noting that ideally physicians would have more
time. However, numerous commenters stated that November 1 is too late
in the year for quality measures to be published in the Federal
Register to be implemented by January 1 of the following year and
encouraged CMS to publish the final list of approved measures earlier
to allow clinicians and vendors sufficient time to prepare for the
performance period. A few commenters specifically noted the need to
give EHR software vendors adequate time to update their software and
establish workflows to match measures, observing that this process
takes several months and that many vendors do not update their systems
with new measures until June.
Response: We understand the commenters' concern. As described
above, the process for selecting MIPS quality measures entails multiple
steps, beginning with an annual call for measures and culminating with
the publication of the annual list of quality measures in a final rule.
While we strive to release the final list of quality measures as soon
as feasible, we cannot do so until we have completed all of the
requisite steps. With respect to commenters' statement that software
developers need adequate time to update their software to capture
measures, we will work to assure that measures have been appropriately
reviewed and release measures as early as possible. In future years,
CMS will release specifications for eCQMs well in advance of November 1
of the year preceding a given performance period. For example, for the
2017 performance period, we released specifications for all eCQMs that
may be considered for implementation into MIPS in April 2016. We are
open to commenters' suggestions for other ways that we can streamline
the measure selection process to enable us to release the annual list
of quality measures and/or measure specifications sooner than November
1st.
Comment: A few commenters were concerned with CMS's plan to update
quality measures on a yearly basis. The commenters recommended that
measures be considered in ``test/pilot'' mode, and rigorously evaluated
for validity and accuracy during the pilot period, before they are
included in CMS's quality programs. Further, the commenters suggested
that measures should be maintained for more than 1 year, to ensure the
agency has a reasonable understanding of how clinicians have performed
and improved over time, as well as to determine whether CMS's
priorities have been reasonably met, with respect to included quality
measures.
Response: Measures that are NQF-endorsed must be tested for
reliability and validity. For measures that are not NQF-
endorsed, we consider whether and to what extent the measures have been
tested for reliability and validity. We do not take the decision to
remove a measure lightly and agree with the commenters that we should
take into consideration how clinicians have performed and improved over
time, among other factors, when deciding to remove a quality measure
from the program.
Comment: Several commenters recommended separate timelines for new
measures as opposed to updated specifications and suggested that when
changes to the list of MIPS quality measures are made, those changes
should not be implemented until at least 18 months after they are
announced and finalized. One commenter suggested that 12 months are
needed for vendor implementation, and another 6 months allocated for
real-world beta testing of measures to identify and resolve defects and
inconsistencies in a measure update for implementation the following
year. The commenter further requested a minimum of 6 months' notice
prior to any reporting period for implementation of revised measures.
Some commenters recommended more time, at least 6 months, to implement
a new metric before being scored to allow time to work out reporting
issues with vendors. Other commenters requested that specific measure
definitions be published at least 120 days prior to the start of the
reporting period.
Response: We do not believe it is necessary to develop unique
timelines for measures that we will consider for the program. Although
we understand the commenters' point that new measures require
additional consideration beyond simple changes to measure
specifications, we believe we account for those considerations when
developing our proposals and in consulting with the stakeholder
community during the measure development process. We describe our
process in detail in our Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf).
Comment: One commenter expressed discontent with measures
specifications that change in mid-season. The commenter requested that
the measures be accepted based on the new or the old specifications and
that neither submission be scored.
Response: We would like to note that measure specifications do not
change during the performance period. Prior to the beginning of the
performance period, measure specifications are shared, and only change
for the next performance period or at another time indicated in
rulemaking. We cannot accept multiple versions of quality measure data,
so we can only accept one version of a measure's specifications during
a performance period.
Comment: One commenter requested that CMS quickly notify clinicians
when measures are introduced and retired. Further, other commenters
were concerned about the proposed changes in quality measures. The
commenters stated that this will require more resources and time to
sort through all the changes.
Response: We agree and will make every possible effort to notify
clinicians when we propose and adopt measures for MIPS, and will
similarly notify clinicians as quickly as possible if and when we
retire measures from the program, which is also done through
rulemaking. Our intention is to keep clinicians as informed as possible
about the quality criteria on which they will be measured, something we
have done within the PQRS and other quality reporting programs.
Comment: One commenter recommended that to avoid concerns regarding
uneven opportunities for
[[Page 77139]]
clinicians, registries, and health IT vendors, CMS should require all
measures planned for inclusion in its quality reporting programs to
include specifications such that any organization that would want to
use those measures may do so.
Response: Measure specifications will be available on the Quality
Payment Program Web site (qualitypaymentprogram.cms.gov). Additionally,
to provide clarity to MIPS eligible clinicians when they select their
quality measures we also will publish the numerical baseline period
benchmarks prior to the performance period (or as close to the start of
the performance period as possible) in the same location as the
detailed measure specifications. These measure benchmarks will be
published for those quality measures for which baseline period data is
available. For more details on our quality performance category
benchmarks, please refer to section II.E.6. of this final rule with
comment period.
Comment: One commenter recommended that CMS implement a review
process when it considers measures for use at a different level than
the measure's intended use (for example, the clinician level). The
commenter recommended that this process include, but not be limited
to, convening a technical expert panel, holding a public comment
period, and reviewing measure specifications to ensure measures are
feasible and scientifically acceptable in all environments and at all
intended levels of measurement.
Response: As part of our measure selection process, stakeholders
have multiple opportunities to review measure specifications and
provide input on whether or not they believe the measures are
applicable to clinicians as well as feasible, scientifically
acceptable, and reliable and valid at the clinician level. As we
discussed in section II.E.5.c of this
final rule with comment period, the annual Call for Measures process
allows eligible clinician organizations and other relevant stakeholder
organizations to identify and submit quality measures for
consideration. Presumably, stakeholders would not submit measures for
consideration unless they believe that the measure is applicable to
clinicians and can be reliably and validly measured at the individual
clinician level. The NQF convened Measure Application Partnership (MAP)
provides an additional opportunity for stakeholders to provide input on
whether or not they believe the measures are applicable to clinicians
as well as feasible, scientifically acceptable, and reliable and valid
at the clinician level. Furthermore, we must go through notice and
comment rulemaking to establish the annual list of quality measures,
which gives stakeholders an additional opportunity to review the
measure specifications and provide input on whether or not they believe
the measures are applicable to clinicians as well as feasible,
scientifically acceptable, and reliable and valid at the clinician
level. Additionally, we are required by statute to submit new measures
to an applicable, specialty-appropriate peer-reviewed journal.
Comment: Several commenters suggested providing a 3-year phase-out
period for measures being proposed for removal. The commenters also
suggested that CMS provide measure owners with more detailed analysis
on the use of their measures so that they can work to develop the next
generation of measures and/or improve performance with measures.
Response: We allow the public to comment on any proposals for
measure removals, but we do not intend to adopt a general 3-year phase-
out policy at this time. We believe the MIPS program must be flexible
enough to accommodate changes in clinical practice and evidence as they
occur.
Comment: A few commenters commended and supported CMS for its
proposal to remove unneeded measures and reduce administrative burden
while still providing meaningful rewards for high quality care provided
by MIPS eligible clinicians in small practices. Commenters recommended
that CMS remove topped out measures, duplicative measures, and measures
of basic standards of care. Another commenter suggested that CMS
establish a mechanism for expeditiously changing quality measures that
are no longer consistent with published best practices. Further,
another commenter noted that patients are better served when eligible
clinicians are able to dedicate their time and effort to recording data
that is pertinent and specific to patient issues and care, and thus,
the commenter recommended that CMS remove irrelevant quality measures
and redundant quality measures in order to align MIPS eligible
clinicians with CMS' goal to improve reporting efficiency.
Response: We intend to ensure that measures are not duplicative,
and we believe that the need for some measures of basic care standards
is still present given the clinical gaps evidenced by the performance
rate. Measures must be removed through notice-and-comment rulemaking
and are thus not expeditiously removed. Measures are reviewed in
accordance with the removal criteria discussed in the proposed rule (81
FR 28193) and a determination is made to retain or to propose for
removal.
Comment: A few commenters opposed removing measures as topped out,
stating that high performance on a measure should be rewarded and
incentivized. Other commenters recommended that CMS consider adopting
new measures addressing similar concepts to ensure that there are no
gaps in measurement in distinct disease areas before removing topped
out measures.
Response: We agree that we should not automatically remove measures
that are topped out without considering other factors, such as whether
or not removing the measure could lead to a worsening performance gap.
For instance, if the variance of performance on the measure indicates
that there is no identified clinical performance gap, this also impacts
the decision to remove the measure on the basis of being ``topped
out.'' We will continue to look at topped out criteria in addition to
performance gaps when selecting measures to remove, recognizing that
topped out measures no longer provide information that permits the
meaningful comparison of clinicians.
Comment: One commenter did not support the selection of quality
measures, as the commenter believed the quality measures are surrogates
for measuring true value as a clinician and lack validity.
Response: We believe quality measurement is critical to ensuring
that Medicare beneficiaries and all patients receive the best care at
the right time. We note further that we are required by statute to
collect quality measures information, and we believe quality
measurement is an opportunity for MIPS eligible clinicians to
demonstrate the quality of care that they provide to their patients.
Comment: One commenter proposed that instead of the list of self-
selected quality measures, CMS could establish a measure set that the
agency could calculate on behalf of clinicians using administrative
claims, QCDR data, and potentially other clinical data that clinicians
report with their claims or through EHRs. The commenter stated that
these administrative claims-based measures should include some measures
that apply to a broad scope of clinicians, as well as some overuse
measures (for example, imaging for
non-specific low back pain). Further, the commenter suggested that CMS
also could include measures from other
[[Page 77140]]
settings, such as inpatient hospitals, because some clinicians, such as
hospitalists, may be best measured through hospital quality measures
(for example, hospital readmissions). The commenter also suggested that
through this approach CMS also would have more complete information to
remove topped-out measures, and to prioritize measures based on
performance gaps.
Response: We note that we proposed three administrative claims-
based measures, and that we do accept information electronically and
through QCDRs. We are researching the best way to attribute care to
clinicians within facilities. We are also looking into the best method
to identify topped-out measures and to quantify a decision to remove
measures from the program. Finally, measures have been identified based
on specialty.
Comment: Numerous commenters disagreed with the elimination of
measures group reporting and asked that CMS reconsider the removal of
measures groups, in order to reduce reporting burden. Further,
commenters noted that measures groups are designed to provide an
overall picture of patient care for a particular condition or set of
services and provide a valuable means of reporting on quality.
Commenters stated that measures groups ensure that specialties,
individual physicians, and small practices have access to meaningful
measures that allow physicians to focus on procedures and conditions
that represent a majority of their practice. Another commenter
expressed belief that the removal of
measure groups will skew quality reporting further in favor of large
group practices because the CMS Web Interface allows for reporting on a
sampling of patients.
Response: We agree that there are measures to which specialists
should have access that are meaningful for their specialty, which is
why we proposed replacing measure groups with specialty measure sets to
ensure simplicity in reporting for specialists. We believe that the
specialty measure sets are a more appropriate way for MIPS to
incorporate measures relevant to specialists than measures groups.
Further, we proposed specialty measure sets in an effort to align with
the CQMC.
Comment: One commenter agreed with efforts to streamline the
process of reviewing and identifying applicable quality measures, and
supported the inclusion of specialty measure sets in Table E of the
Appendix in this final rule with comment period.
Response: We appreciate the support.
Comment: One commenter encouraged CMS to move rapidly to a core set
of measures by specialty or subspecialty because the commenter believes
an approach using high-value measures would enable direct comparison
between similar clinicians, and would provide assurance that the
comparison is based on a consistent and sufficiently comprehensive set
of quality indicators. The commenter believed a core measure set should
include measures of outcomes, appropriate use, patient safety,
efficiency, patient experience, and care coordination.
Response: We agree that a core set of measures by specialty would
be optimal when comparing similar eligible clinicians and we did
incorporate the measures that were included in the core sets developed
by the CQMC. CMS will continue to evaluate a core set of measures by
specialty to ensure each set is diverse and indicative of CMS
priorities of quality care.
Comment: One commenter recommended use of specialty- and
subspecialty-specific core measure sets that would provide more
reliable comparative information about clinician performance than the
6-measure approach. The commenter believed that advancing the current
state of
performance measurement should be a top priority in MACRA
implementation, and toward that end, the commenter supported using the
improvement activities category to reward development of high-value
measures, and in particular patient-reported outcomes.
Response: We will consider any new measure sets in the future, and
welcome commenters' and other stakeholders' feedback on what measure
sets we should consider in the future for MIPS. We agree that advancing
performance measurement should be a top priority for MIPS, and we thank
the commenter for their support of improvement activities.
Comment: One commenter recommended identifying quality measures
that are specialty specific and germane to what is practiced. Another
commenter recommended that CMS apply a standardized approach to ensure
that measures included in the specialty measure sets are clinically
relevant and aligned with updates occurring in the measure landscape.
Response: We appreciate the comment and note that identification of
quality measures that are germane to clinical practice is our intent.
We are adopting quality measure sets that are specialty-specific and
clinically relevant to that particular specialty.
Comment: Several commenters supported the concept of measure sets,
but had some concerns with the construction of the proposed measure
sets, stating that some of the measures included in the specialty sets
are not appropriate for certain specialties or subspecialties. The
commenters believed the proposed rule reflects more of a primary care
practice focus. Further, the commenters were concerned that reporting
requirements may not always reflect real differences in specialized
practices. Commenters suggested these issues indicate that all of the
measure sets should be more closely vetted by clinicians from the
specialty providing the service.
Response: We worked with specialty societies to develop measure
sets and will continue to work with specialty societies to further
improve the existing specialty measure sets and also develop new
specialty measure sets for more specialty types.
Comment: Some commenters believed the quality measures are not
relevant to certain specialties. Further, one commenter expressed
concern about the proposed MIPS quality measures because the commenter
believed the quality measures do not reflect the unique care provided
by geriatricians for their elderly patients, but rather were developed
for non-elderly patient care. The commenter believed this would
unfairly disadvantage geriatricians, who care for sicker, older
patients; lack the resources and technology incentives to develop new,
more relevant measures; and frequently practice in settings that do not
have health IT infrastructure.
Response: We believe that the quality measures adopted under the
Quality Payment Program are relevant to clinicians that offer services
to Medicare beneficiaries, including elderly patients. We tried to
align certain measures to specialty-specific services, and we welcome
commenters' feedback on additional measures or specialties that we
should consider in the future.
Comment: A few commenters stated that not every physician and
specialty fits CMS's measure molds and that there is a lack of
specialty measure sets. Further, commenters suggested that CMS identify
an external stakeholder entity to maintain the proposed specialty-
specific measure sets.
Response: We have identified specialty sets based on the ABMS
(American Board of Medical Specialties) list. Although we realize that
all specialties or sub-specialties are not covered under these
categories, we encourage clinicians to report measures that are most
relevant to their practices, including those that are not within a
specialty set.
[[Page 77141]]
Comment: A few commenters stated that specialists with fewer
options will be required to report on topped out measures which do not
award full credit, resulting in a disadvantage. Another commenter was
concerned that as groups choose the six quality measures on which they
perform best, those popular measures will become inflated and quickly
become ``topped out.'' Further, commenters stated that there is little
value in reporting on measures already close to being ``topped out,''
just for the sake of reporting. One commenter suggested that CMS
continue to develop more clinically relevant measures and remove those
that have been topped out.
Response: As measures become topped out, we will review each
measure and make a determination to retain or remove the measure based
on several factors including whether the measure is a policy priority
and whether its removal could have unintended impact on quality
performance. We refer the commenters to section II.E.6.a. of this final
rule with comment period for additional details on our approach for
identifying and scoring topped out measures.
Comment: One commenter suggested that CMS carefully consider all of
the specialties that will be engaged in the MIPS program in future
years as measure requirements are expanded and to develop policies that
provide flexibility for those physician types who may have limited
outcomes measures to report. Another commenter recommended CMS ensure
the availability of high priority MIPS quality measures for
specialists. The commenter requested that CMS closely track whether the
number of high priority MIPS measures available to specialists
approximates the number available to primary care physicians. Should
the measures available to specialists be considerably lower, they
recommended that CMS expedite the creation of specialty specific high
priority measures within its measure development process to assure
parity in reporting opportunity across specialties.
Response: We are aware of the limitations in the pool of measures,
and we will continue to work with stakeholders to include more measures
for specialties without adequate metrics.
Comment: One commenter stated that it is difficult to evaluate the
long-term negative impact the proposed rule may have because there was
no information on how CMS intends to incorporate new measures into the
quality category. The commenter encouraged information sharing on the
intended process to evaluate newly proposed measures.
Response: As part of the PQRS Call for Measures process, we have
historically outlined the criteria that we will use to evaluate measure
submissions. We anticipate continuing to do so for the annual MIPS Call
for Measures process as well. To the extent measures that are submitted
under the annual Call for Measures process meet these criteria, we
would then propose to include them in the MIPS quality measure set
through notice and comment rulemaking.
Comment: A few commenters supported continued use of PQRS measures.
In addition, one commenter acknowledged and expressed appreciation for
CMS's addition of a comprehensive list of measures.
Response: We thank the commenters for their support and believe
that the continued use of PQRS measures will help ease the transition
into MIPS for many MIPS eligible clinicians. Further, the statute
provides that PQRS measures shall be included in the final measure list
unless removed.
Comment: Some commenters requested evidence based measures that are
proven to improve quality of care, improve outcomes, and/or lower the
cost of care. Further, they stressed that CMS must continue to improve
measures for greater clinical relevance, clinical and patient centered
measures, and avoid unintended consequences. A few commenters stated
that the PQRS measures have no relevance or benefit to their practice.
In addition, one commenter stated that the majority of PQRS measures do
not show an evidence-based rationale or justify implementation.
Response: We believe that the measures that we have adopted fulfill
the goals the commenters suggest. We further believe that any metrics
that capture activities beyond the clinician's control reflect systemic
quality improvements to which MIPS eligible clinicians contribute. We
note further that most measures that are being implemented have gone
through consensus endorsement by a third-party reviewing organization
(NQF) prior to their adoption. As part of this endorsement process, the
measures are evaluated for validity, reliability, feasibility,
unintentional consequences, and expected impact on clinician quality
performance. Furthermore, MIPS eligible clinicians also have the option
of working with QCDRs to submit measures that are not included in the
MIPS measure set but that may be more appropriate for their practices.
Comment: A few commenters expressed concern about the robustness of
the proposed quality measures. The commenters thought that many of the
measures lack demonstrated improvement in patient care, create
administrative burden for the eligible clinician to track, and will not
capture quality of care provided.
Response: Most of the CMS measures are submitted by measure
stewards and owners from the medical community. We continue to
encourage stakeholders to submit measures for consideration during our
annual call for measures. Further, we realize that measures are not the
only indication of quality care. However, they are one objective way to
assess quality of care patients receive. We believe this indicator will
become more effective and reliable as the measure set is expanded and
refined over the years.
Comment: One commenter stated that none of the 465 options for
reporting measures in the proposed rule are based on scientific method.
They recommended that each of the 465 options should meet three
criteria. First, it should be based on scientific method. Second, there
should be a plan to review and act on the data that is reported to CMS
on the measure. Third, the reporting of such quality measures should be
an automated function of the electronic medical record system and not
impair, slow down or distract physicians participating directly in
patient care.
Response: As stated previously, most of the proposed measures have
been endorsed by the NQF. The endorsement process evaluates measures on
scientific acceptability, among other criteria. Depending on the policy
priority of the measure, CMS may include measures without NQF
endorsement. All of our measures, regardless of endorsement status,
are thoroughly reviewed, undergo rigorous analysis, are presented for
public comment, and have a strong scientific and clinical basis for
inclusion.
Comment: One commenter indicated that many proposed measures have
not been tested, the proposed thresholds for reliability and validity
are very low, and the proposed rule does not provide specific
benchmarks for measures. The commenter recommended extra time to test
and
implement measures across programs, with an emphasis on simplicity,
transparency and appropriate risk-adjustment.
Response: Most MIPS measures are NQF-endorsed, which means they
have been evaluated for feasibility, reliability, and validity, or in
the absence of NQF-endorsement, the measures are required to have an
evidence-based focus. All of our measures, regardless of endorsement
status, are thoroughly reviewed,
[[Page 77142]]
undergo rigorous analysis, are presented for public comment, and have a
strong scientific and clinical basis for inclusion. In addition, as
discussed in section II.E.6. of this final rule with comment period, we
intend to publish measure-specific benchmarks prior to the start of the
performance period for all measures for which prior year data are
available.
Comment: One commenter recommended rigorous review and updating of
quality measures, including addressing how measures are related to
outcomes.
Response: CMS does annual reviews of all measures to ensure they
continue to be clinically relevant, appropriate, and evidence based. In
the event that we determine that a measure no longer meets these
criteria, we may consider removing it from the MIPS quality
measure set for future years through notice and comment rulemaking.
Comment: One commenter asked CMS to offer time-limited adoption for
any MIPS measures that are not fully tested and have not been through a
rigorous vetting process, as this offers four benefits: MIPS eligible
clinicians will have expedited access to a greater selection of
measures; measure developers could have access to a larger data set for
measure testing; we will gain earlier insight into appropriateness and
relevance of such measures; and MIPS eligible clinicians will gain
valuable experience with the measures before performance benchmarks are
established.
Response: We believe that we must ensure that all MIPS measures are
clinically valid and tested prior to their use in a value-based
purchasing program. All of our measures are thoroughly reviewed,
undergo rigorous analysis, are presented for public comment, and have a
strong scientific and clinical basis for inclusion, including testing
for validity, reliability, feasibility, unintended consequences, and
the expected impact on clinician quality performance.
Comment: One commenter supported the Quality Payment Program
rewarding MIPS eligible clinician performance as measured by quality
metrics, but expressed concern that there are few outcome measures,
particularly measures assessing the quality of care provided across
settings and providers or linking clinical quality and efficiency to a
team. The commenter recommended that the Quality Payment Program develop and
include quality measures that reflect performance of eligible
clinicians as part of a team, perhaps through composite measure groups,
which would take into account various components of quality that move
toward the desired outcome. Alternatively, or in addition to such a
measure, the commenter recommended that CMS work toward establishing
clear associations between the clinician level measures in MIPS,
facility level measures in the Hospital OQR and other provider level
measures such as home health agency measures, so that all clinicians
could see how one set of quality activities feeds into another, thus
driving improvement across settings and providers for a given
population.
Response: We would encourage the commenter to submit measures for
possible inclusion under MIPS through the Call for Measures process.
Further, it may be advantageous for the commenter to report through a
QCDR or report as a group. We are committed to developing outcome
measures and intend to work with interested stakeholders through our
Quality Measurement Development Plan, which describes our approach.
Comment: One commenter requested that the requirement for measures
be reduced to encourage meaningful engagement and improvement in
patient care. The current set of measures is not relevant to all
clinicians, especially given the diversity of procedures, patient
populations, and geographic locations among clinicians. The commenter also
believes that the quality measures do not align with the advancing care
information, cost or improvement activities performance categories, and
recommended alignment of quality and cost measures to provide
information needed to increase value.
Response: We have worked to adopt numerous measures that apply to
as many clinicians as possible, and we have specified in other sections
of this final rule with comment period how clinicians with few or no
measures applicable to their practice will be scored under the program.
We believe that the measures we are adopting will encourage meaningful
engagement and quality improvement, and we do not agree that reducing
the number of required measures will make those goals easier for
physicians to pursue. However, following the principle that the MIPS
performance categories should be aligned to enhance the program's
ability to improve care and reduce participation burden, we will
consider additional ways to align the quality and cost performance
category measures in the future as well as ways to further quality
improvement through the advancing care information and improvement
activities performance categories.
Comment: One commenter suggested limiting the available measures to
three detailed measures per medical discipline. The commenter suggested
that the criteria for choosing measures should be that they are related
to a public health goal and will ensure that patients with a chronic or
life-threatening condition are given a high level of care.
Response: We believe that performance should be measured on
measures that are most relevant and meaningful to clinicians. To that
end, we need to balance parsimony with ensuring that there are relevant
and meaningful measures available to the diverse array of MIPS eligible
clinicians.
Comment: One commenter expressed concern that there is a 30-month
gap between the selection of quality measures and when they are used;
the commenter believes the Core Quality Measures Collaborative (CQMC)
core measure sets need immediate integration into the final rule with
comment period.
Response: Measures that are to be implemented in the program must
undergo notice-and-comment rulemaking, as required by statute. Nearly
all of the measures that are a part of the CQMC core measure sets are
being finalized for implementation.
Comment: Several commenters stated that all measures used must be
clinically relevant, harmonized, and aligned among all public and
private payers and minimally burdensome to report. The commenters
stated the goal of such alignment would be to reduce measure
duplication and improve harmonization and, ultimately, build a national
quality strategy. Commenters recommended that CMS use measure sets
developed by the multi-stakeholder Core Quality Measures Collaborative,
as well as ensure that specialists are well represented in the effort
to align quality measures.
Response: Specialty societies are among the stakeholders that
participate in the Core Quality Measures Collaborative, and we will continue to
work with specialists to align quality measures in the future. Further,
nearly all of the measures that are a part of the CQMC core measure
sets are being finalized for implementation.
Comment: One commenter supported the consideration of Pioneer ACO
required quality measures for use in MIPS. Another commenter requested
we allow quality reporting measures to be differentiated between
primary care and specialty physicians. For instance, we could use the
same quality reporting
[[Page 77143]]
structure as the Pioneer ACO Model for MIPS, and allow flexibility in
measures when considering reporting by an APM.
    Response: MIPS eligible clinicians have the opportunity to report
via the CMS Web Interface if they are part of a group of at least 25
MIPS eligible clinicians. Pioneer ACOs were also required to use the
CMS Web Interface to submit their quality measures. In addition, many
of the quality measures that are included in the CMS Web Interface are
available for other data submission methods as well. Therefore, MIPS
eligible clinicians could report these same measures through other data
submission methods if they so choose or report measures from one of the
specialty-specific measure sets. If a MIPS eligible clinician
participates in an APM, then the APM Scoring Standard for MIPS Eligible
Clinicians Participating in MIPS APMs applies. As discussed further in
section II.E.5.h of this final rule with comment period, the APM
Scoring Standard outlines how the MIPS quality performance category
will be scored for MIPS eligible clinicians who are APM participants.
    Comment: A few commenters disagreed with being rated on factors over
which the commenters have no control (for example, A1c or blood
pressure). Further, other commenters asked CMS to use quality metrics
that captured activities under the physician's control and had been
shown to improve quality of care, enhance access to care, and/or reduce
the cost of care.
    Response: Clinicians have the option to report the measures that are
most relevant to their practice and for which they have the greatest
control over the reported outcome. We further believe that clinicians
have the opportunity to influence patients' actions and outcomes on
their selected metrics, which reflect systemic quality improvements of
which MIPS eligible clinicians are a part.
    Comment: One commenter requested that measures be adjusted for
patient acuity, which also affects clinician capability.
Response: We believe that the commenter is referring to the need to
risk adjust measures for patient acuity. We note that we allow for risk
adjustment if the measures have risk adjusted variables and methodology
included in their specifications.
Comment: One commenter requested clear instructions from CMS as to
how to choose quality measures since the concepts are extremely
confusing. Another commenter sought clarification regarding the quality
measures and submission of quality measures so that clinicians can
submit the measures with highest performance. The commenter requested
that CMS clearly define which measures are cross-cutting measures and
which are outcomes measures.
Response: We created the specialty sets to assist MIPS eligible
clinicians with choosing quality measures that are most relevant to
them. Other resources to help MIPS eligible clinicians choose their
quality measures will also be available on the CMS Web site. In
addition, we would encourage MIPS eligible clinicians to reach out to
their specialty societies for further assistance. We would also like to
note that the measure tables use a symbol to indicate which measures
are outcome measures. We are not finalizing the cross-cutting measure
requirement.
    Comment: One commenter recommended adequately testing new eCQMs to
confirm they are accurate, valid, efficiently gathered, reflect the
care given, and successfully transport using the quality reporting
document architecture format. Additionally, eCQMs should be endorsed by
NQF and undergo an electronic specification testing process.
Response: Thank you for your comments. We ensure that validity and
feasibility testing are part of the eCQM development process prior to
implementation. Although we strive to implement NQF-endorsed measures
when available, we note that lack of NQF endorsement does not preclude
us from implementing a measure that fulfills a gap in the measure set.
    Comment: A few commenters requested that only non-substantive
changes in eCQM measure sets and specifications, which do not require
corresponding changes in clinician workflow, be made through annual
IPPS rulemaking, while substantive changes (for example, a new CQM or a
change in a current CQM that requires a workflow change) should be
published in MIPS rulemaking and not go live until 18 months after
publication.
Response: We note that section 1848(q)(2)(D)(i)(II)(cc) of the Act
requires the Secretary to update the final list of quality measures
from the previous year (and publish such updated list in the Federal
Register) annually by adding new quality measures and determining
whether or not quality measures on the final list of quality measures
that have gone through substantive changes should be included in the
updated list. It is unclear why the commenters are suggesting that non-
substantive changes to MIPS eCQM measure sets and specifications should
be made through the annual IPPS rulemaking vehicle since the IPPS
proposed and final rules typically address policy changes for hospital
clinicians. We would use future rulemaking for the MIPS program to
address substantive changes to measures.
    Comment: A few commenters supported the development of a robust de
novo measure set of eCQMs for use by specialty MIPS eligible clinicians
that are designed specifically to capture eCQM data as part of EHR-
enabled care delivery for use in future iterations of the CMS Quality
Payment Program. One commenter believed eCQMs should be developed for
specialties to measure process improvement and improved outcomes where
data is not available in a standardized format and no national standard
has been codified.
Response: We encourage stakeholders to submit new electronically-
specified specialty measures for consideration during the annual call
for measures.
Comment: Some commenters encouraged closer alignment between MACRA
and EHR Incentive Program eCQM specifications and recommended using the
same version specifications for the same performance year for MIPS and
the EHR Incentive Program.
Response: We appreciate the comments; however, we note that there
is no overlap between the MIPS performance periods and the reporting
period for the Medicare EHR Incentive Program for EPs. We note that a
subset of the eCQMs previously finalized for use in the Medicare EHR
Incentive Program for EPs are being finalized as quality measures for
MIPS for the 2017 performance period.
Comment: One commenter disagreed with the overall complexity of the
quality performance category measures because the current available EHR
software offerings do not easily automate the work of capturing
measures.
Response: We understand that not all quality measurement may yet be
automated and share the concerns expressed. CMS and ONC have also
received similar feedback in response to the CQM certification criteria
within the ONC Health IT Certification Program.
Based on this feedback, ONC has added a requirement to the 2015
Edition ``CQM--record and export'' and ``CQM--import and calculate''
criteria that the export and import functions must be executable by a
user at any time the user chooses and without subsequent developer
assistance to operate. This is an example of one way ONC is
incentivizing more automated quality measurement through regulatory
requirements. In addition, CMS and
[[Page 77144]]
ONC will continue to work with health IT vendors and health IT product
and service vendors, as well as the stakeholders involved in measure
development to support the identification and capture of data elements,
and to test and improve calculations and functionality to support
clinicians and other health care providers engaged in quality reporting
and quality improvement.
Comment: One commenter wanted to know if CMS plans to continue
adding and removing measures from the group of 64 e-measures, as these
measures have not been modified for several years. They noted that
adding new measures to this set will require much more than 2 months'
notice in order for developers to implement them, especially given the
90 percent data completeness criteria placed on EHRs.
    Response: We may propose to remove measures from the e-measures
group if they meet our criteria for removal from MIPS. We are
lowering the data completeness criteria to 50 percent for the first
MIPS performance period. As new eCQMs are developed and are ready for
implementation, we will evaluate when they can be implemented into MIPS
and will consider developer implementation timeframes as well.
Comment: One commenter requested that CMS not significantly reduce
the number of available eCQMs as many small practices adopted EHRs for
their ability to capture and report quality data and lack sufficient
resources to invest in another reporting tool.
Response: We are revising the list of eCQMs for 2017 to reflect
updated clinical standards and guidelines. A number of eCQMs have not
been updated due to alignment with the EHR Incentive Program in the
past. This has resulted in a number of measures no longer being
clinically relevant. We believe the updated list, although smaller, is
more reflective of current clinical guidelines.
Comment: One commenter noted that CMS is proposing removal of 9 EHR
measures, and that while removal may be warranted, in some cases the
act of removal means that there are potential gaps for those who plan
to report quality using eCQMs. The commenter therefore recommended CMS
encourage measure developers to help fill these gaps.
Response: We would encourage measure developers to continue to
submit new electronically-specified measures for potential inclusion in
MIPS through the Call for Measures process.
Comment: One commenter wanted to know whether the number of
measures will be expanded for electronic reporting or whether
additional measures will be offered only through the registry/QCDR
reporting option.
Response: In subsequent years, we expect more measures to be
available by electronic reporting but that will depend partly on
whether or not electronic measures are submitted via the annual Call
for Measures process.
Comment: One commenter supported the creation of a computer
adaptive quality measure portfolio and believed measures should be an
area of significant focus in the final rule with comment period,
including portability.
Response: We thank the commenter and agree that measures are an
area of significant focus in this final rule with comment period. We
look forward to learning more about private sector innovations in
quality measurement in the future.
    Comment: A few commenters supported the option, but not the
requirement, that physicians select facility-based measures that are
aligned with the physician's goals and have a direct bearing on the
physician's practice. A commenter noted the challenge for clinicians
and groups that function across multiple facilities and recommended
hospital-level risk-adjusted outcome measurement attributable to the
principal physician or group responsible for the primary diagnosis.
Response: We thank the commenters for their support and the
suggestion. We will consider proposing policies on this topic in the
future.
Comment: Some commenters supported the distinction between
hospitalists and other hospital-based clinicians from community
clinicians and recommended that CMS develop a methodology for the
second year of MIPS that will give facility-based clinicians the choice
to use their institution's performance rates as the MIPS quality score.
Another commenter recommended evaluation of 20 existing measures that
represent clinical areas of relevance to hospitalists and could be
adapted for MIPS, and indicated that the commenter's organization is
ready to work with CMS to develop facility-alignment options.
Response: We will take this feedback into account in the future.
Comment: One commenter stated that quality measures that apply to
primary care physicians should not be the same measures applied to
consulted physicians.
Response: We would like to note that there is a wide variety of
measures, and they do vary between those applicable to primary care
physicians and to other physicians, and that all participants may
select the measures that are most relevant to them to report.
Comment: Several commenters requested that CMS accept Government
Performance and Results Act (GPRA) measures that Tribes and Urban
Indian health organizations are already required to report as quality
measures to cut down on the reporting burden.
Response: There are many GPRA measures that are similar to measures
that already exist within the program. In addition, some GPRA measures
are similar to measures that are part of a CQMC core measure set. We
strive to lessen duplication of measures and to align with measures
used by private payers to the extent practicable. If there are measures
reportable within GPRA that are not duplicative of measures within
MIPS, we recommend the commenters work with measure owners to submit
these measures during our annual Call for Measures.
Comment: One commenter recommended CMS provide options for
specialties without a sufficient number of applicable measures such as:
determining which quality measures are applicable to each MIPS eligible
clinician and only holding them accountable for those measures;
addressing measure validity concerns with non-MAP, non-NQF endorsed
measures; establishing ``safe harbors'' for innovative approaches to
quality measurement and improvement by allowing entities to register
``test measures'' which clinicians would not be scored on but would
count as a subset of the 6 quality measures with a participation
credit; and allowing QCDRs flexibility to develop and maintain measures
outside the CMS selection process.
Response: We have intentionally not mandated that MIPS eligible
clinicians report on a specific set of measures as clinicians have
varying needs and specific areas of care. MIPS eligible clinicians
should report the measures applicable to the service they provide. All
measures, including those that are NQF endorsed, go through notice-and-
comment rulemaking. With regard to non-MAP and non-NQF endorsed
measures, we would like to note that these measures were reviewed by
the CQMC, an independent workgroup, which includes subject matter
experts in the field. Further, we would like to note that over 90
percent of the measures have gone through the MAP.
Comment: Another commenter suggested that CMS require that
outcomes-based measures constitute at least 50 percent of all quality
measures and that CMS accelerate the development and adoption of such
[[Page 77145]]
clinical outcomes-based measures, including patient survival. Some
commenters also suggested that CMS utilize measures that have already
achieved the endorsement of multiple stakeholders and have been
evaluated to ensure their rigor (for instance, through processes like
the National Quality Forum (NQF) endorsement).
Response: We encourage stakeholders to submit new specialty
measures for consideration during the annual call for measures. We
welcome specialty groups to submit measures for review to CMS that have
received previous endorsement. Furthermore, we are committed to
developing outcome measures and intend to work with interested
stakeholders through our Quality Measurement Development Plan, which
describes our approach.
Comment: One commenter stated that it is concerning that the
proposed quality performance categories fail to explicitly mention
health equity as a priority. A few commenters recommended stratified
reporting on quality measures by race and ethnicity, especially quality
measures related to known health disparities. One commenter
specifically supported stratification by demographic data categories
that are required for Office of National Coordinator (ONC) for Health
Information Technology-certified electronic health records (EHRs).
Stratification allows for the examination of any unintended
consequences and impact of specific quality performance measures on
safety net eligible clinicians and essential community clinicians for
potential beneficiary/patient-based risk adjustment. Further,
commenters stated that stand-alone health equity quality measures
should be developed and incentivized with bonus points as high priority
measures. One commenter recommended that patient experience be kept as
a priority measure eligible for a bonus point in the final rule with
comment period.
    Response: We thank the commenter for this feedback on high-priority
measures and the bonus points awarded for them. It is our intent that
measures actually examine quality for all patients, and some of our
measures have been risk-adjusted and stratified. We look forward to
continuing to work with stakeholders to identify appropriate measures
of health equity.
Comment: Several commenters supported adding the Medicaid Adult
Core Set, which is particularly important for people dually enrolled in
Medicare and Medicaid who have greater needs and higher costs.
Response: We thank the commenters for their support, and would like
to note that we are working to align the Medicaid core set with MIPS in
future years.
Comment: One commenter requested that CMS engage state Medicaid
leaders to maximize measure alignment across Medicare and Medicaid, and
articulate the functional intersection of various measure sets and
measure set development work (Sec. Sec. 414.1330(a)(1) and
414.1420(c)(2) and the Appendix in this final rule with comment
period). The commenter specifically encouraged alignment efforts to
focus on measures where there is a clear nexus between Medicare and
Medicaid populations (Sec. Sec. 414.1330(a)(1) and 414.1420(c)(2) and
Appendix in this final rule with comment period). With respect to
specific measures, the commenter had a particular interest in MIPS
measures that relate to the avoidance of long-term skilled care in the
elderly and disabled. The commenter believed that this is an area of
nexus between the two programs, as the majority of newly eligible
elderly in nursing facilities were unknown to the Medicaid program in
the timeframe immediately leading up to the long-term care stay. The
commenter believed this is a high priority for state Medicaid leaders
and federal partners to engage around quality measure alignment.
Response: We intend to align quality measures among all CMS quality
programs where possible, including Medicaid, and will take this comment
into account in the future.
Comment: One commenter suggested that CMS engage states to maximize
measure alignment across Medicare and existing State common measure
sets.
Response: We work with regional health collaboratives and other
stakeholders where possible, and we will consider how best to align
with other measure sets in the future.
Comment: A few commenters proposed that CMS align a set of quality
measures to Medicare Advantage measures to be able to compare
performance between APMs, FFS, and MAOs. Other commenters supported
ensuring that quality measures are aligned across reporting programs,
and build from the HVBP measures set when incorporating home health
into quality reporting programs.
Response: We will take these suggestions into account for future
consideration.
Comment: One commenter encouraged CMS to adopt measures in the
quality performance category that align with existing initiatives
focused on delivering care in a patient-centric manner. In particular,
the commenter suggested that CMS make sure the quality measures align
with the clinical quality improvement measures used in the Transforming
Clinical Practice Initiative by the Practice Transformation Networks.
Response: We purposely aligned the measures in the Transforming
Clinical Practice Initiative with those used in CMS' quality reporting
programs and value-based purchasing programs for clinicians and
practices. We will continue to work on alignment across such programs
as they evolve in the future.
Comment: One commenter noted that CMS might also look to align with
other measure sets that may be outside the health care sector such as
with other local health assessment and community or state health
improvement activities.
Response: We work with regional health collaboratives and other
stakeholders where possible, and we will consider how best to align
with other measure sets in the future.
    Comment: One commenter believed that the quality performance
category should include a reasonable number of measures that truly
capture variance in patient populations and that CMS should continue to
review these measures on an annual basis to ensure that they are
clinically relevant and address the needs of the general patient
population.
    Response: As part of our process, we review the measures that we
are adopting for clinical relevance on an annual basis, and we
appreciate commenters' focus on ensuring that measures remain
clinically relevant.
Comment: One commenter did not believe current quality metrics
reflect metrics that are meaningful to physicians or patients.
Response: We respectfully disagree. Most of the current quality
measures have been developed by clinician organizations that support
the use of thoughtfully constructed quality metrics. We continue to
welcome recommendations or submissions of new measures for
consideration.
Comment: One commenter noted that in order for small, private
independent practices to demonstrate improved outcomes, the metrics
system must be designed to account for their successes.
Response: We are committed to developing outcome measures and
intend to work with interested stakeholders following the approach
outlined in our Quality Measurement Development Plan. While many
existing outcome measures are focused on institution level improvement
(such as tracking hospital readmissions), we believe there is an
opportunity to develop clinician practice outcome
[[Page 77146]]
measures that are designed to reflect the quality of large group, small
group, and individual practice types. We welcome submissions of new
outcome measures for consideration.
Comment: A few commenters suggested that CMS collect SES data for
race, ethnicity, preferred language, sexual orientation, gender
identity, disability status and social, psychological and behavioral
health status, to stratify quality measures and aid in eliminating
disparities. One commenter noted that use of 2014 and 2015 edition
CEHRT would reduce burden on clinicians to collect this data.
Response: The CMS Office of Minority Health (OMH) works to
eliminate health disparities and improve the health of all minority
populations, including racial and ethnic minorities, people with
disabilities, members of the lesbian, gay, bisexual, and transgender
(LGBT) community, and rural populations. In September 2015, CMS OMH
released the Equity Plan for Improving Quality in Medicare (CMS Equity
Plan), which provides an action-oriented, results-driven approach for
advancing health equity by improving the quality of care provided to
minority and other underserved Medicare beneficiaries.
The CMS Equity Plan is based on a core set of quality improvement
priorities that target the individual, interpersonal, organizational,
community, and policy levels of the United States health system in
order to achieve equity in Medicare quality. It includes six priorities
that were developed with significant input and feedback from national
and regional stakeholders and reflect our guiding framework of
understanding and awareness, solutions, and actions. They provide an
integrated approach to build health equity into existing and new
efforts by CMS and stakeholders.
Priority 1 of the CMS Equity Plan focuses on expanding the
collection, reporting, and analysis of standardized demographic and
language data across health care systems. Though research has
identified evidence-based guidelines and practices for improving the
collection of data on race, ethnicity, language, and disability status
in health care settings, these guidelines are often not readily
available to health care providers and staff. Preliminary research has
been conducted to determine best practices for collecting sexual
orientation and gender identity information in some populations, but
currently there are no evidence-based guidelines to standardize this
collection.
We will facilitate quality improvement efforts by disseminating
best practices for the collection, reporting, and analysis of
standardized data on race, ethnicity, language, sexual orientation,
gender identity, and disability status so that stakeholders are able to
identify and address the specific needs of their target audience(s) and
monitor health disparities.
    Comment: One commenter stated that quality measure results vary
between populations depending on practice location because outcomes
differ. Outcomes differ due to nutrition, reliable transportation, drug
addiction, safe living space, and more, making comparison between
practices difficult.
Response: We understand the commenter's concern that any single
measure cannot capture the unique circumstances of a clinician's
community including some of the sociodemographic factors mentioned. Our
aim, however, is to drive quality improvement in all communities and we
believe thoughtfully constructed measures can help all clinician
practice types improve. Further, we will continue to investigate
methods to ensure all clinicians are treated as fairly as possible
within the program and monitor for potential unintended consequences
such as penalties for factors outside the control of clinicians.
Comment: A few commenters suggested that CMS commit to measures for
a set amount of time (for instance, 2-3 years) before making
substantial changes. One commenter suggested that CMS adopt a broader
policy of maintaining measures in MIPS for a minimum number of years
(for example, at least 5 years) to limit scenarios where CMS does not
have historical data on the same exact measure to set a benchmark or
otherwise evaluate performance.
Response: We understand the commenter's concern. However, we do not
believe it appropriate to commit to maintaining the same measures in
MIPS for a substantial period of time, because we are concerned about
the possibility that the measures themselves or the underlying medical
science may change. We believe MIPS must remain agile enough to ensure
that the measures selected for the program reflect the best available
science, and that may require dropping or changing measures so that
they reflect the latest best practices. For example, when a gap in
clinical care no longer exists, reporting the measure offers no benefit
to the patient or clinician.
Comment: One commenter encouraged CMS to indicate which measures
would be on the quality measure list for more than 1 year to allow
concentration of improvement efforts over a two- to three-year period.
The commenter indicated that uncertainty on which measures may be
included on the list each year could negatively impact improvement
programs in rural areas that have fewer patients and would require a
longer time to determine if interventions are successful. Another
commenter requested that CMS limit additions and modifications to
quality measures, especially as MIPS eligible clinicians become
accustomed to reporting, to allow eligible clinicians sufficient time
to meet quality metrics.
    Response: We would like to note that CMS conducts annual reviews of
all measures to ensure they are relevant, appropriate, and evidence
based. Therefore, there is the potential for the annual list of
measures to be updated each year. We will make every effort to
ensure that the measures we adopt for the MIPS program reflect the
latest medical science, and we will also work to ensure that all
physicians and MIPS eligible clinicians are fully aware of the measures
that we have adopted.
    Comment: A few other commenters recommended testing and comment
periods before new measures are added, to assess for potential
unintended effects associated with healthcare disparities, including a
one-year transparency (report-only) period before measures are phased
into incentives and a requirement for NQF endorsement.
Response: All of the measures selected for MIPS include routine
maintenance and evaluation to assess performance and identify any
unintended consequences. We have extensive measurement experience (such
as in the PQRS) and do not believe we need to delay measure
implementation to assess for unintended consequences. We further note
that the NQF endorsement process is separate and apart from the MIPS
measure selection process. We refer the commenter to NQF for their
recommendations on enhancements to the endorsement process.
    Comment: One commenter was concerned about annual changes in the
performance measurement category and the ability to respond to the
changes in an appropriate timeframe. The commenter proposed that a
minimum of 9 months, and ideally 12 months, be given to review changes
to the performance categories each year.
    Response: We understand the commenter's concern, but we do not believe
this timeline to be operationally feasible given the Program's
statutory deadlines. We note that stakeholders have the ability to
begin reviewing
[[Page 77147]]
potential changes to the quality performance category and provide
comment on the potential changes with the publication of the proposed
rule each year.
    Comment: One commenter discussed how quality measures encourage
shared decision making and patient-centered care. The commenter
requested that CMS adopt specific quality measures addressing both
overtreatment and undertreatment in specific instances, such as blood
sugar and blood pressure management.
Response: We are looking at measures for appropriate use and are
working with numerous stakeholders to identify more appropriate use
measures.
    Comment: One commenter encouraged CMS to align MIPS quality measures
with the Uniform Data System so that FQHCs will be able to submit one
set of quality data a single time to both the Uniform Data System and CMS.
Response: We thank the commenter for this suggestion.
Comment: One commenter was concerned that clinicians could select
``low-bar'' quality measures, or measures that are not the best
representation of clinicians' patient populations or the diseases they
treat. The commenter requested that CMS monitor the selection of quality
measures by clinicians.
Response: We believe that MIPS eligible clinicians should have the
ability to select measures that they believe are most relevant to their
practice. Further, we would like to note that we conduct annual reviews
of all measures to ensure they are relevant, appropriate, and evidence
based.
    After consideration of the comments and after correcting and revising
specific information, we are finalizing at Sec. 414.1330(a)(1) that
for purposes of assessing performance of MIPS eligible clinicians on
the quality performance category, CMS will use quality measures
included in the MIPS final list of quality measures. Specifically, we
are finalizing the Final Individual Quality Measures Available for MIPS
Reporting in 2017 in Table A of the Appendix in this final rule with
comment period. Included in Table B of the Appendix in this final rule
with comment period is a final list of quality measures that do not
require data submission. Newly proposed measures that we are finalizing
are listed in Table D of the Appendix in this final rule with comment
period. The final specialty-specific measure sets are listed in Table E
of the Appendix in this final rule with comment period. Measures that
we are finalizing for removal can be found in Table F of the Appendix
and measures that will have substantive changes for the 2017
performance period can be found in Table G of the Appendix in this
final rule with comment period.
(2) Call for Quality Measures
Each year, we have historically solicited a ``Call for Quality
Measures'' from the public for possible quality measures for
consideration for the PQRS. Under MIPS, we proposed to continue the
annual ``Call for Quality Measures'' as a way to engage eligible
clinician organizations and other relevant stakeholders in the
identification and submission of quality measures for consideration.
Under section 1848(q)(2)(D)(ii) of the Act, eligible clinician
organizations are professional organizations as defined by nationally
recognized specialty boards of certification or equivalent
certification boards. However, we do not believe there needs to be any
special restrictions on the type or make-up of the organizations
carrying out the process of development of quality measures. Any such
restriction would limit the development of quality measures and the
scope and utility of the quality measures that may be considered for
endorsement. Submission of potential quality measures regardless of
whether they were previously published in a proposed rule or endorsed
by an entity with a contract under section 1890(a) of the Act, which is
currently the National Quality Forum, is encouraged.
As previously noted, we encourage the submission of potential
quality measures regardless of whether such measures were previously
published in a proposed rule or endorsed by an entity with a contract
under section 1890(a) of the Act. However, consistent with the
expectations established under PQRS, we proposed to request that
stakeholders apply the following considerations when submitting quality
measures for possible inclusion in MIPS:
Measures that are not duplicative of an existing or
proposed measure.
Measures that are beyond the measure concept phase of
development and have started testing, at a minimum.
Measures that include a data submission method beyond
claims-based data submission.
Measures that are outcome-based rather than clinical
process measures.
Measures that address patient safety and adverse events.
Measures that identify appropriate use of diagnosis and
therapeutics.
Measures that address the domain for care coordination.
Measures that address the domain for patient and caregiver
experience.
Measures that address efficiency, cost and utilization of
healthcare resources.
Measures that address a performance gap or measurement
gap.
We requested comment on these proposals.
    The following is a summary of the comments we received regarding our
proposal for the Call for Quality Measures.
Comment: A few commenters supported the Call for Quality Measures
approach to encouraging the development of quality measures and the
list of considerations when submitting quality measures to MIPS. One
commenter believed the criteria should also include: measures that
span the various phases of surgical care and align with the
patient's clinical flow; measures based on validated clinical data;
measures that can be risk-adjusted and include SDS factors, if
applicable; and process measures used in conjunction with outcome
measures to provide a more comprehensive picture of clinical workflow
and help link to improvement activities.
Response: We thank the commenter for their support and will
consider including these additional factors for evaluating quality
measures for potential inclusion in MIPS in the future. Further, we
will consider additional measures covering the five phases of surgical
care that the commenter specifies in the future. We have a rolling
period for new measure suggestions, and we welcome commenters'
nominations.
Comment: One commenter recommended that the proposed rule quality
measures emphasize patient experience, outcomes, shared decision
making, care coordination, and other measures important to patients.
One commenter believed the selection and development of measures should
include patients, stakeholders, consumers and advocates. The commenter
believes measures should be used to give feedback to clinicians and
recommended the CAHPS for MIPS survey and clinical data registries be
used to collect patient-reported data, and that individual clinician
level data be collected on performance.
Response: We agree that the selection and development of measures
should include patients, consumers, and advocates. We have included
patients, consumers, and advocates in the selection and development of
measures to promote an objective and balanced approach to this process.
Comment: One commenter recommended that CMS focus on
[[Page 77148]]
developing measures assessing physicians' communication with patients,
care coordination, and efforts to fill practice gaps, because the commenter
believed these skills are more indicative of the care physicians
provide than outcome measures.
Response: We thank the commenter for this feedback. We have a
process in place for nominating measures for inclusion in the MIPS
program, including an annual call for measures and the Measures Under
Consideration (MUC) list, and we welcome stakeholders' feedback into
that process.
Comment: One commenter supported the inclusion of robust quality
measures. The commenter encouraged CMS to focus on including quality
measures under MIPS that target shared decision making and health
outcomes, including survival and quality of life. The commenter supported
outcome measures but noted that, in certain circumstances where there is
a well-defined link to outcomes, process measures or intermediate
outcome measures may be most appropriate.
Response: Thank you for your comment. We agree that measures that
target shared decision making and health outcomes should be included in
MIPS.
Comment: One commenter stated that CMS should promote the adoption
of new quality measures that fill in measure gaps, accentuate the
benefits of innovation, and keep pace with evolving standards of
clinical care.
    Response: Thank you for your comment. We agree, and we plan to work
with stakeholders on new measure development.
Comment: Some commenters suggested that CMS carefully consider the
selection of quality measures to ensure that they meaningfully assess
quality of care for patients with diverse needs, particularly those
patients with one or more chronic conditions.
Response: CMS is aware of the need for measures that address
diverse needs and encourages the development of these types of
measures.
Comment: One commenter believed that more patient safety measures
should be included. The commenter recommended that a culture of patient
safety be encouraged across healthcare organizations; that indicators
of physical and emotional harms be used to measure workforce safety;
that patient engagement be included as a measure of safety, beyond
patient satisfaction; and that measures to track and monitor
transparency, communication and resolution programs be added to the
MIPS portion of the proposed rule.
Response: We thank the commenter and agree that patient safety
should be encouraged across healthcare organizations. We note that we
consider patient safety measures to be high-priority measures.
Comment: One commenter recommended quality measures be redefined.
The commenter believed many are reporting burdens and are pedestrian
from a quality standpoint and have little to do with physician work.
    Response: Our quality measures define a reference point for the care
that is expected to be delivered. CQMs are tools that help
measure and track the quality of health care services provided by MIPS
eligible clinicians within our health care system. Measuring and
reporting these measures helps to ensure that our health care system is
delivering effective, safe, efficient, patient-centered, equitable, and
timely care. MIPS eligible clinicians are accountable for the care they
provide to our beneficiaries.
    Comment: One commenter requested that, when a MAV process is
invoked, the number of measures that could have been reported be
greater than the number of additional measures needed to satisfy the
reporting requirement.
Response: We did not propose a MAV process for the MIPS Program,
but we did propose, and will be finalizing, a data validation process.
This process will apply for claims and registry submissions to validate
whether MIPS eligible clinicians have submitted all applicable measures
when MIPS eligible clinicians submit fewer than six measures or do not
submit the required outcome measure or other high priority measure if
an outcome measure is not available, or submit fewer than the full set
of measures in the MIPS eligible clinicians' applicable specialty set.
Comment: One commenter suggested that CMS employ a more transparent
approach to measure selection for the MIPS program, including a
detailed rationale on why certain measures are not selected, providing
feedback to MIPS eligible clinicians and provider organizations which
have committed resources to improving measures.
    Response: While we understand the commenter's concern, we believe we
have been substantially transparent with the considerations we have
taken into account when developing the proposed measure list for MIPS
and have provided detailed rationale explaining the choices we have
made. In the appendix of this final rule with comment period, we have
provided a list of measures proposed for removal along with the
rationale. We would also like to note that measures that appear on the
MUC list are reviewed by the MAP and undergo detailed analyses, and we
refer stakeholders to the MAP's report for feedback on those measures.
We will continue working with stakeholders and measure developers to
improve their measures.
    Comment: In an effort to increase transparency in the process, one
commenter suggested that, prior to the publication of the
recommendations, CMS contact the measure developer to make sure CMS's
conclusions are accurate and to ensure the developer does not have data
to suggest otherwise.
Response: We review measures annually with measure owners and
stewards. Further, we provide feedback to measure developers on measures
being submitted through the Call for Measures process. Stakeholders
also have the opportunity to comment on new measures that are proposed
in the annual notice and comment process.
Comment: A few commenters suggested that CMS develop a plan to
transition from the use of process measures to outcomes measures to
allow MIPS eligible clinicians to adopt the most updated evidence-based
standards of care and to ensure that MIPS eligible clinicians are truly
achieving the goals of value-based health care. One commenter
acknowledged that there is a large body of evidence showing that
process measures do not improve outcomes.
Response: We aim to have the most current measure specifications
updated annually. We also agree that outcome measures are more
appropriate for assessing health outcomes and for accountability. We
describe our measure development process in detail in our Quality
Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We look forward to working with
stakeholders to develop a wide range of outcome measures.
Comment: One commenter expressed concern that CMS' proposal is too
focused on outcome measures while commenter believes the agency should
also focus on establishing meaningful process measures tied to
evidence-based outcomes. Another commenter noted that both outcome
measures and high quality, evidence-based process measures that address
gaps and variations in care have a role in improving care, and
cautioned CMS against too much emphasis on outcomes without regard to
evidence-based processes that underlie care.
[[Page 77149]]
Response: Although process measures will continue to play an
important role in quality measurement, we believe that they should be
tied to evidence based outcomes. As noted, we have a measure
development strategy that seeks to develop a wide range of outcome
measures but our plan will also provide for the development of both
process and structural measures that may be needed to fill existing gaps
in measurement. We encourage the submission of measures that address
gaps in measurement, have significant variations in care, and also
outcome measures, including patient reported outcome measures.
    Comment: Several commenters agreed that focusing more on the
outcome of a clinical intervention than on the process of care is better
for patients, and requested that we adopt more outcome measures.
Commenters further stated that outcome measures would yield the most
meaningful data for consumers and are true indicators of the quality of
healthcare services.
Response: We agree that outcome measures are important and will
continue to emphasize the importance of outcomes measures in the
future. We also agree that outcome measures are more appropriate for
assessing health outcomes and for accountability. We describe our
measure development process in detail in our Quality Measure
Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We look forward to working with stakeholders to
develop a wide range of outcome measures.
    Comment: One commenter requested that outcome measures represent
clear care goals rather than intermediate process measures, thereby
allowing clinicians freedom to determine the best allocation of
resources to improve clinical outcomes.
    Response: We have made available numerous measures, including those
with intermediate outcomes. Although there are far fewer measures with
intermediate outcomes, we agree that we should consider both
intermediate and long-term outcome measures for assessing overall
health outcomes and for accountability. We describe our measure
development process in detail in our Quality Measure Development Plan
(https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We
look forward to working with stakeholders to develop a wide range of
outcome measures, including intermediate outcome measures.
Comment: Another commenter noted that, within the set of quality
measures that can be self-selected, 58 of the measures focus on
outcomes and 192 focus on process, and that only 9 focus on efficiency.
The commenter encouraged CMS to conduct additional research around
efficiency measures that could be added to the overall menu of measures
and, where available and clinically relevant to practice areas, MIPS
eligible clinicians should be required to report on an efficiency
measure. Some commenters believed that the relative imbalance of
process measures over outcome measures can undermine CMS's efforts to
encourage eligible clinicians to demonstrate actual improvements in a
patient's health status.
Response: We agree that there is a need for more outcome and
efficiency measures and will strive to achieve a more balanced
portfolio of measures in future years. As previously noted, we have a
measure development strategy that seeks to develop a wide range of
outcome measures but our plan will also provide for the development of
both process and structural measures that may still be needed to fill
existing gaps in measurement. CMS encourages the submission of measures
that address gaps in measurement and have significant variations in
care. Outcome measures are a recognized gap in measurement, including
patient reported outcome measures, and we look forward to working with
stakeholders to develop a wide range of such measures.
Comment: One commenter recommended that as CMS selects measures, it
should include measures that capture variance across patient
populations; should consider adopting more outcome measures; and should
add measures related to coordination of care/exchange of information
between specialists and PCPs in all specialty categories.
Response: We agree with the commenter on the importance of these
measures and have proposed these types of measures for the program. We
would encourage the commenter to submit additional measures for
possible inclusion in MIPS through the Call for Measure process. We are
particularly interested in developing outcome measures for chronic
conditions (such as diabetes care and hypertension management), which
present a measurement challenge in capturing the many factors that
impact the care and outcomes of patients with chronic conditions.
Comment: A few commenters agreed that outcome measures are very
important, but cautioned CMS against simply increasing the number of
such measures each year. Commenters also opposed the proposal to
increase the required number of patient experience measures in future
years because the physician lacks control over such measures. One
commenter supported the inclusion of risk adjustment and stratification
in measures and suggested that CMS examine ASPE's future
recommendations.
Response: We are aware of the need for measures that are adjusted
for case-mix variation through risk adjustment and stratification
techniques. As noted in this final rule with comment period, the
Secretary is required to take into account the relevant studies
conducted and recommendations made in reports under section 2(d) of the
Improving Medicare Post-Acute Transformation (IMPACT) Act of 2014.
Under the IMPACT Act, ASPE has been conducting studies on the issue of
risk adjustment for sociodemographic factors on quality measures and
cost, as well as other strategies for including SDS evaluation in CMS
programs. We will review the report when issued by ASPE and will
incorporate findings as appropriate and feasible through future
rulemaking. With respect to patient experience measures, we believe
that measures that assess issues that are important to patients are an
integral feature of patient-centered care.
Comment: One commenter requested that CMS continue to use both
process and outcome measures moving forward as a ramp-up tactic for
MIPS eligible clinicians new to reporting on quality measures.
Additionally, some commenters expressed particular support for measures
which track appropriate use. The commenters strongly believe that,
especially in advanced illness, individuals should only receive
treatment that is aligned with their values and wishes, but that, many
times, because of a lack of advance care planning, there is overuse and
overtreatment at this stage. Other commenters encouraged CMS to focus
efforts on the development of underuse measures that can serve as a
consumer protection for ensuring that eligible clinicians are not
limiting access to needed care in order to reduce costs.
Response: We agree with the importance of developing more measures
of appropriate use and seek to have more of these measure types for a
wider range of specialties, including geriatrics and palliative care.
Comment: A few commenters suggested that CMS should focus on
identifying and emphasizing measures that drive more robust outcomes.
The
[[Page 77150]]
commenters stated there are too many measures from which to choose.
Response: We appreciate the commenter's focus on the importance of
patient outcome measurement. However, we believe there remains a role
for process measures that are linked to specific health outcomes. We
would encourage the commenter to submit potential new measures for
inclusion in MIPS through the Call for Measures process.
Comment: A few commenters suggested that CMS use the
recommendations of the National Academy of Medicine's (NAM) 2015 Vital
Signs report to identify the highest priority measures for development
and implementation in the MIPS.
Response: We have reviewed the recommendations of the National
Academy of Medicine report and it informed our Quality Measure
Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf) which emphasizes the need for outcome measures over
process measures. We will continue to use the report as a resource to
inform future measurement policy development.
Comment: Several commenters supported the development and
strengthening of patient reported outcomes, PRO-based measures, and
patient experience quality measures as a component of the MACRA
proposed payment models. Further, commenters stated that patient-
generated data assess issues that are important to patients and are a
key element of patient-centered care, enabling shared decision-making
and care planning, and ensuring that patients are receiving high-
quality health care services.
    Response: We agree that PROs are important. Currently we have a
number of PRO measures and intend to expand that portfolio. We also
believe the other measure domains are important in measuring other
aspects of care.
    Comment: One commenter recommended that patient-reported outcomes
be given greater weight, along with continued solicitation of
multi-stakeholder input on the available required measures through the
NQF-convened MAP and updated patient sampling requirements over time.
The commenter also recommended that all clinicians in groups of
two or more should report a standard patient experience measure.
Response: We agree that patient-reported outcomes are important
quality measures. We note also that patient experience measures, while
not required, are considered high-priority and are incentivized through
the use of bonus points. However, patient-reported measurement
generally imposes a cost on clinician practices to conduct the survey,
and mandatory reporting of such measures may present a burden to many
clinicians, especially those in small and solo practices. In future
years, we will continue to seek methods of expanding reporting of these
measures without unduly penalizing practices that cannot afford the
measurement costs.
Comment: One commenter believed that it is necessary to
specifically call out and prioritize patient-reported outcomes (PROs)
and PRO-based measures (PROMs).
Response: We agree. We highlighted person and caregiver-centered
experience and outcome measures in the proposed rule (81 FR 28194) and
continue to believe that they appropriately emphasize the importance of
collecting patient-reported data.
Comment: One commenter recommended that CMS should encourage EHR
developers to incorporate PROMs, as well as development and use of
PROMs.
Response: We agree that the inclusion of PROMs in health IT systems
can help support quality improvement efforts at the provider level. As
PROMs begin to be electronically specified and approved for IT
development, testing and clinician use, we will work with ONC, health
IT vendors, and stakeholders engaged in measure development to support
the process of beginning to offer and support PROMs within certified
health IT systems.
Comment: One commenter recommended expediting the adoption of
patient-reported outcome measures (PROMs) for all public reporting
programs as well as condition-specific outcome sets that focus on the
longitudinal outcomes and quality-of-life measures that are most
important to patients.
Response: We agree with the commenter that PROMs are an important
aspect of assessing care quality, and we intend to continue working
with stakeholders to encourage their use. We refer readers to section
II.E.10. of this final rule with comment period for final policies
regarding public reporting on Physician Compare.
    Comment: One commenter stated that the quality metrics have nothing to
do with patient outcomes and measure process instead of results. The
commenter requested the metrics be shifted to clinical outcome
measures, including patient reported outcomes.
Response: We believe patient-reported outcomes are important as
well, but we respectfully disagree with commenter's characterization of
our measures.
    Comment: One commenter recommended that CMS consider measures that
are validated and scientifically sound and ensure measures have
clinical relevance, given that the existing vehicles for measure
inclusion have expanded to include qualified clinical data
registries and specialty measure sets. The commenter also recommended
that CMS consider working towards a set of core measures (similar to
what was implemented through the Core Quality Measures Collaborative)
that are most impactful to patient care. Further, they recommended that
CMS consider the adoption of more outcome measures, specifically those
using patient-reported outcomes.
Response: We thank the commenter for this feedback and agree. Our
intent is to include more outcomes measures in the MIPS Program as more
become available over time, and we are working with measure
collaboratives to include more measures and align them with other
health care payers. We believe the specialty measure sets ensure that
we have adopted measures of clinical relevance for specialists. We did
propose adoption of the majority of measures that were part of the CQMC
core measure sets into the MIPS program.
Comment: One commenter recommended that CMS consider paring down
from the list of over 250 quality measures from which a clinician may
self-select for quality reporting, and instead focus on the creation of
a smaller number of clinically relevant measures, particularly
including additional patient outcome measures where available, and
where there are separate and distinct outcomes measures. Additionally,
as CMS embarks on future iterative changes to the Quality Payment
Program, the commenter encouraged CMS to continue to rely on multi-
stakeholder and consensus driven feedback loops, such as Core Quality
Measures Collaborative, to inform additional core measure sets, where
such measure sets are useful and promote the appropriate comparisons.
    Response: We appreciate the commenter's concerns and note that we
intend to continue our work with the Core Quality Measures
Collaborative. We did propose adoption of the majority of measures that
were part of the CQMC core measure sets into the MIPS program. Further,
to help clinicians successfully report, it is important that we provide
as wide a range of measure options as possible that are germane to
[[Page 77151]]
the clinical practice of as many MIPS eligible clinicians as possible.
Comment: One commenter expressed concern related to the self-
selection of quality measures. The commenter noted that they
participated in the Core Quality Measures Collaborative (the
``Collaborative'') to assist in the development of evidence-based
measures and to help drive the health care system toward improved
quality, decision making, and value-based payment and purchasing. The
Collaborative recommended 58 MIPS quality measures. The commenter
suggested that CMS consider making it mandatory for clinicians to
report on those 58 measures when the measures are available within
appropriate categories and when the measures are clinically relevant.
Response: We have taken an approach that allows MIPS eligible
clinicians to select their own measures for reporting based on
beneficiaries seen in their practices and the measures that are most
relevant to their clinical practice. However, we have included the CQMC
measures in the MIPS measure sets, including the specialty-specific
measure sets, to encourage their adoption into clinical practice.
Comment: A few commenters stated that CMS should ensure that
ongoing quality measurement in the quality performance category
encourages the appropriate use of imaging services so that Medicare
patients receive accurate and timely diagnoses.
Response: We are adopting a number of appropriate use measures that
track both over- and under-use of medical services. We encourage
stakeholders to submit additional measures on this topic, and will take
those submissions into account in the future.
Comment: One commenter expressed concern with the measures
available to clinicians because many of the Core Quality Measures
Collaborative measure sets were not included in the MIPS list and many
of the MIPS measures are not NQF endorsed. Some commenters recommended
that measures be approved by NQF before use in the program.
Response: We believe including 17 Core Quality Measures
Collaborative measures for the transition year is an excellent starting
point to promote measurement alignment with private sector quality
measurement leaders. While we encourage NQF-endorsement for measures,
we do not require that all measures be endorsed by the NQF before use
in the program, as requiring NQF endorsement would limit measures that
currently fill performance gaps. We continue to encourage measure
developers to submit their measures to NQF for endorsement.
Comment: A few commenters supported CMS's encouragement in the
proposed rule of eliminating special restrictions on the type and
make-up of the organization developing quality measures. Commenters
further supported the ability to submit measures regardless of whether
such measures were previously published in a proposed rule or endorsed
by NQF.
Response: We would like to note that while we prefer NQF-
endorsement of measures for MIPS, we do not require that new measures
for inclusion in MIPS be NQF-endorsed; however, in order for a measure
to be finalized for MIPS it must be published in the Federal Register.
Comment: A few commenters supported the proposed ``Call for Quality
Measures.'' Further, one commenter suggested that CMS use this process
to focus on specialty measures.
Response: We note that although we also conducted an annual Call
for Measures under PQRS, section 1848(q)(2)(D)(ii) of the Act requires
us to conduct a Call for Quality Measures for MIPS annually.
Comment: One commenter supported allowing new quality measures to
be submitted by specialty societies with supporting data from QCDRs.
Response: We encourage specialty societies to continue to submit
new measures for potential inclusion in the MIPS program.
Comment: One commenter supported adoption of evidence-based
measures through the ``Call for Quality Measures'' process. The
commenter further suggested that CMS establish an interim process for
adoption of subspecialty quality measure sets until quality measures
can go through the ``Call for Quality Measures'' process so that CMS
may be able to quickly assess the commenter's members on clinically
meaningful measures.
Response: We thank the commenter for the recommendation; however,
we believe that the current process allows for careful review and
scrutiny of the measures. We note that the Call for Quality Measures is
open year-round, and that measures for inclusion in MIPS must go
through notice-and-comment rulemaking.
Comment: One commenter sought clarification regarding whether new
process-based measures will continue to be accepted.
Response: While we will consider new process-based measures, we
would request that they be closely tied to an outcome and that there be
demonstrable variation in performance.
Comment: One commenter supported the flexibility CMS provided in
the proposed rule for health care providers to select measures that
make sense within their practice, as well as opening up the process for
the annual submission of new measures, which will allow MIPS to evolve
with the nation's dynamic health care system.
Response: Thank you for the support.
After consideration of the comments we are finalizing our proposal
to continue the annual ``Call for Quality Measures'' under MIPS.
Specifically, eligible clinician organizations and other relevant
stakeholders may submit potential quality measures regardless of
whether such measures were previously published in a proposed rule or
endorsed by an entity with a contract under section 1890(a) of the Act.
We do encourage measure developers and stakeholders to submit measures
for NQF-endorsement as this provides a scientifically rigorous review
of measures by a multi-stakeholder group of experts. Furthermore, we
are finalizing that stakeholders shall apply the following
considerations when submitting quality measures for possible inclusion
in MIPS:
Measures that are not duplicative of an existing or
proposed measure.
Measures that are beyond the measure concept phase of
development and have started testing, at a minimum.
Measures that include a data submission method beyond
claims-based data submission.
Measures that are outcome-based rather than clinical
process measures.
Measures that address patient safety and adverse events.
Measures that identify appropriate use of diagnosis and
therapeutics.
Measures that address the domain for care coordination.
Measures that address the domain for patient and caregiver
experience.
Measures that address efficiency, cost and utilization of
healthcare resources.
Measures that address a performance gap.
(3) Requirements
Section 1848(q)(2)(D)(iii) of the Act provides that, in selecting
quality measures for inclusion in the annual final list of quality
measures, the Secretary must provide that, to the extent practicable,
all quality domains (as defined in section 1848(s)(1)(B) of the Act)
are addressed by such measures and must ensure that the measures are
selected consistent with the process for selection of measures under
section 1848(k), (m), and (p)(2) of the Act.
[[Page 77152]]
Section 1848(s)(1)(B) of the Act defines ``quality domains'' as at
least the following domains: clinical care, safety, care coordination,
patient and caregiver experience, and population health and prevention.
We believe the five domains applicable to the quality measures under
MIPS are included in the NQS's six priorities as follows:
Patient Safety. These are measures that reflect the safe
delivery of clinical services in all health care settings. These
measures may address a structure or process that is designed to reduce
risk in the delivery of health care or measure the occurrence of an
untoward outcome such as adverse events and complications of procedures
or other interventions. We believe this NQS priority corresponds to the
domain of safety.
Person and Caregiver-Centered Experience and Outcomes.
These are measures that reflect the potential to improve patient-
centered care and the quality of care delivered to patients. They
emphasize the importance of collecting patient-reported data and the
ability to impact care at the individual patient level, as well as the
population level. These are measures of organizational structures or
processes that foster both the inclusion of persons and family members
as active members of the health care team and collaborative
partnerships with health care providers and provider organizations or
can be measures of patient-reported experiences and outcomes that
reflect greater involvement of patients and families in decision
making, self-care, activation, and understanding of their health
condition and its effective management. We believe this NQS priority
corresponds to the domain of patient and caregiver experience.
Communication and Care Coordination. These are measures
that demonstrate appropriate and timely sharing of information and
coordination of clinical and preventive services among health
professionals in the care team and with patients, caregivers, and
families to improve appropriate and timely patient and care team
communication. They may also be measures that reflect outcomes of
successful coordination of care. We believe this NQS priority
corresponds to the domain of care coordination.
Effective Clinical Care. These are measures that reflect
clinical care processes closely linked to outcomes based on evidence
and practice guidelines or measures of patient-centered outcomes of
disease states. We believe this NQS priority corresponds to the domain
of clinical care.
Community/Population Health. These are measures that
reflect the use of clinical and preventive services and achieve
improvements in the health of the population served. They may be
measures of processes focused on primary prevention of disease or
general screening for early detection of disease unrelated to a current
or prior condition. We believe this NQS priority corresponds to the
domain of population health and prevention.
Efficiency and Cost Reduction. These are measures that
reflect efforts to lower costs and to significantly improve outcomes
and reduce errors. These are measures of cost, utilization of
healthcare resources and appropriate use of health care resources or
inefficiencies in health care delivery.
Section 1848(q)(2)(D)(viii) of the Act provides that the pre-
rulemaking process under section 1890A of the Act is not required to
apply to the selection of MIPS quality measures. Although not required
to go through the pre-rulemaking process, we have found the NQF
convened Measure Application Partnership's (MAP) input valuable. We
proposed that we may consider the MAP's recommendations as part of the
comprehensive assessment of each measure considered for inclusion under
MIPS. Elements we proposed to consider in addition to those listed in
the ``Call for Quality Measures'' section of this final rule with
comment period include a measure's fit within MIPS, if a measure fills
clinical gaps, changes or updates to performance guidelines, and other
program needs. Further, we will continue to explore how global and
population-based measures can be expanded and plan to add additional
population-based measures through future rulemaking. We requested
comment on these proposals.
The following is a summary of the comments we received regarding our
proposal on requirements for selecting quality measures.
Comment: A few commenters recommended that CMS continue to use the
Measure Application Partnership (MAP) pre-rulemaking process in
determining the final list of quality measures each year. One commenter
supported elimination of the requirement for recommendation by the MAP
for inclusion of MIPS quality measures and believed this could
potentially speed the process for implementing measures into MIPS.
Response: Prior to proposing new quality measures for
implementation into MIPS for the 2017 performance period, we did
consult the MAP for feedback. To view the MAP's recommendations on
these measures, please refer to the report entitled, ``MAP 2016
Considerations for Implementing Measures in Federal Programs:
Clinicians.'' (http://www.qualityforum.org/Publications/2016/03/MAP_2016_Considerations_for_Implementing_Measures_in_Federal_Programs__Clinicians.aspx). We intend to continue to consult the MAP for feedback
on proposed quality measures, but we retain the authority to propose
measures that have not been supported by the MAP.
Comment: Some commenters believed quality measures in MIPS should
go through a multi-stakeholder evaluation process and that CMS should
encourage the use of quality measures endorsed by the NQF.
Response: Most measures are NQF-endorsed or have gone through the
pre-rulemaking process, but we retain the authority to adopt measures
that are not so endorsed. All measures have gone through the
rulemaking and public comment process.
Comment: One commenter had concerns with the performance measures
currently used in PQRS, and therefore, recommended that any measures
CMS proposes to use outside of the core set identified by the Core
Quality Measures Collaborative be endorsed by the Measure Application
Partnership (MAP).
Response: We appreciate the comment to use measures identified by
the CQMC, and while we intend to consult with MAP on measures for MIPS,
we note that we have the authority to implement measures they have not
reviewed.
Comment: A few commenters recommended that quality measures should
prioritize patient-reported outcomes and promote goal-concordant care,
specifically that quality should be evaluated using a harmonized set of
patient-reported outcomes and other appropriate measures that
clinicians can reliably use to understand what matters to patients and
families, achieve more goal-concordant care, and improve the patient
and family experience and satisfaction. Another commenter suggested
that CMS's proposed Quality Payment Program approach for considering
value-based performance should expressly prioritize the patient and
family voice and the constellation of what matters to them as key
drivers of quality measures development and use.
Response: We note that person and caregiver-centered experience
measures are considered high priority under MIPS. For this reason and
the reasons cited by the commenters, we encourage the development and
submission of patient-
[[Page 77153]]
reported outcome measures to the Call for Measures.
Comment: One commenter recommended that CMS include in the MIPS
quality requirements measures of outcomes that align with an
individual's stated goals and values, commonly referred to as
person-centered care, believing that performance measures that promote
individuals articulating their goals and desired outcomes hold the
system accountable for helping people achieve their goals and
preferences. The commenter suggested that CMS reference the National
Committee for Quality Assurance's work on long-term services and
supports measures and person-centered outcomes using a standardized
format to form a basis for building person-centered metrics into MIPS
and APMs.
Response: We will take this into consideration for use in the
future.
Comment: A few commenters suggested making global and population-
based measures optional, arguing that reclassifying these measures as
``population health measures'' under the quality category does not fix
their inherent problems. Commenters suggested that CMS not include
the three population health measures in the quality category.
Response: We believe the population health measures are intended to
incentivize quality improvement throughout the health care system, and
we therefore believe that we have appropriately placed them under the
Quality performance category. However, as discussed in section
II.E.5.b. of this final rule with comment period, CMS will only
finalize the all-cause readmission measure because the other population
measures have not been fully tested with the new risk-adjusted
methodology.
Comment: One commenter expressed support for measures that address
all six of the NQS domains. For the Patient Safety domain, the commenter
especially supported measures designed to reduce risk in the delivery
of health care (for example, adverse events and complications from
medication use). For the Communication and Care Coordination category,
the commenter pointed out that, for pharmacists, ensuring
interoperability and bidirectional communication in this area is
critical.
Response: We encourage MIPS eligible clinicians to select and
report on measures that are applicable to their practices, regardless
of their assigned domain, ultimately to improve the care of their
beneficiaries.
Comment: One commenter supported CMS's alignment of the MIPS
quality measure domain of patient and caregiver experience with the
National Quality Strategy's person and caregiver-centered experience
and outcomes domain, believing it will improve patient-centered care.
Response: We appreciate the support. We support the measures in all
domains, including measures that embrace patient-centered care and
involvement.
After consideration of the comments, we are finalizing the
requirements for the selection of the annual MIPS quality measures.
Specifically, we will categorize measures into the six NQS domains,
and we intend to place future MIPS quality measures within the NQF-
convened Measure Application Partnership's (MAP) review process, as
appropriate. We intend to consider the MAP's recommendations as part
of the comprehensive assessment of each measure considered for
inclusion under MIPS.
(4) Peer Review
Section 1848(q)(2)(D)(iv) of the Act, requires the Secretary to
submit new measures for publication in applicable specialty-
appropriate, peer-reviewed journals before including such measures in
the final annual list of quality measures. The submission must include
the method for developing and selecting such measures, including
clinical and other data supporting such measures. We believe this
opportunity for peer review helps ensure that new measures published in
the final rule with comment period are meaningful and comprehensive. We
proposed to use the Call for Quality Measures process as an opportunity
to gather the information necessary to draft the journal articles for
submission from measure developers, measure owners and measure stewards
since we do not always develop measures for the quality programs.
Information from measure developers, measure owners and measure
stewards will include but is not limited to: background, clinical
evidence and data that supports the intent of the measure;
recommendation for the measure that may come from a study or the United
States Preventive Services Task Force (USPSTF) recommendations; and how
this measure would align with the CMS Quality Strategy. The Call for
Quality Measures is a yearlong process; however, to be aligned with the
regulatory timelines, establishing the proposed measure set for the
year generally begins in April and concludes in July. We will submit
new measures for publication in applicable specialty-appropriate, peer-
reviewed journals before including such measures in the final annual
list of quality measures. We requested comments on this proposal.
Additionally, we solicited comment on mechanisms that could be used,
such as the CMS Web site, to notify the public that the requirement to
submit new measures for publication in applicable specialty-
appropriate, peer-reviewed journals is met. Additionally, we solicited
comment on the type of information that should be included in such
notification.
The following is a summary of the comments we received regarding the
submission of MIPS quality measures to a peer reviewed journal.
Comment: One commenter supported the proposal that new measures
must be submitted to peer reviewed journals.
Response: We thank the commenter for their support.
Comment: One commenter recommended that CMS use the Call for
Quality Measures process as an opportunity to gather the information
necessary to draft the journal articles required for quality measures
implemented under MACRA. The commenter also recommended that any
information required for journal article submission should align with
the information required for the submission of the measure to CMS to
reduce the workload of this new requirement on measure developers.
Response: We appreciate the support and recommendation and intend
to utilize the Call for Quality Measures process to gather information
necessary to draft the journal articles.
Comment: One commenter agreed that CMS should be responsible for
submitting new measures for publication in applicable specialty-
appropriate, peer-reviewed journals before including such measures in
the final list of measures annually. The commenter agreed the
publication requirement will help ensure measures are both meaningful
and comprehensive, but requested that CMS ensure a more collaborative
approach to the submission of measures to peer-reviewed journals. A few
commenters requested that CMS allow measure developers the right to
first submit measure sets to specialty specific, peer-reviewed journals
of their choice. One commenter was concerned about the timing and
sequencing of submitting new measures: given the requirement to submit
new measures for publication in applicable specialty-appropriate,
peer-reviewed journals before including such measures, many journals
will be very
[[Page 77154]]
reluctant to publish measures that are already in the public domain,
and the July 1 measure deadline provides a narrow window for
publication. Another commenter noted that most peer-reviewed medical
journals contain only ground-breaking research and therefore would
not be a good source of information about quality measurement and
improvement. The commenter was concerned that this criterion for
approving new quality measures would be a significant barrier.
Response: We thank the commenters; however, we are required by
statute to submit measures for publication in a peer-reviewed journal
before including them in the final list of measures. Although we may
collaborate with the measure owner to accurately capture the measure
specifications, we cannot fulfill our statutory obligation by allowing
the measure owner to submit the article. The statute requires the
Secretary to submit new measures for publication in applicable
specialty-appropriate, peer-reviewed journals before including such
measures in the final annual list of quality measures. We would like to
note, however, that this does not preclude a measure owner from
independently submitting their measure for publication in a peer-
reviewed journal.
Comment: One commenter recommended that CMS accept measures
independently published in peer reviewed journals as well as measures
submitted by CMS.
Response: We appreciate the suggestion; however, we are required by
statute to submit measures for publication in a peer-reviewed journal
before including them in the final list of measures for MIPS.
Comment: One commenter sought clarity on the process for submitting
new measures for publication in specialty-appropriate, peer-reviewed
journals prior to including measures in the final list, and suggested
an abbreviated peer review process for publication to ensure there will
not be slowdowns in the process of getting measures into the MIPS
quality program.
Response: It is our intent to illustrate this process via
subregulatory guidance that will be posted on our Web site. Further, we
would like to note that we only have an obligation to submit the
measure for publication. If the submission is not accepted for
publication, we will still have met the statutory requirement. If the
submission is accepted, which is our preference, we are not obligated
to delay our rulemaking process until the date the journal chooses to
publish the submission.
Comment: One commenter believed that the proposed process requiring
that HHS submit measures for publication in applicable specialty-
appropriate, peer-reviewed journals was highly duplicative of the work
of measure developers; would infringe on measure ownership and
copyright; and would ultimately limit the availability of and
significantly delay the use of measures in MIPS. The commenter
appreciated the exceptions to the rule for measures in QCDRs and those
included in existing CMS programs, but recommended that this exclusion
be extended to all measures published in a peer-reviewed journal prior
to their submission to CMS. The commenter believed that extending the
exclusion would allow measure developers to maintain their ownership
and copyright, prevent duplication, and ensure measures were not
stalled in the peer review and publication process.
Response: The statute requires the Secretary to submit new measures
for publication in applicable specialty-appropriate, peer-reviewed
journals before including such measures in the final annual list of
quality measures. Further, we would like to note that we only have an
obligation to submit the measure; we do not have to wait for the
measures to be published. Even if the article is not published, we will
have met the requirements under section 1848(q)(2)(D)(iv) of the Act.
We believe that the summary of proposed new quality measures will help
increase awareness of quality measurement in the clinician community
especially for clinicians or professional organizations that are not
aware of the ability to provide public comment on proposed quality
measures through the rulemaking process. We will only submit new
measures in accordance with applicable ownership or copyright
restrictions and cite the measure developer's contribution in the
submission.
Comment: One commenter recommended that new measures be posted to
journals associated with the American Board of Medical Specialties
(ABMS), related subspecialty journals, journals associated with the
American College of that specialty, and non-ABMS-recognized clinical
specialty journals that are trusted resources for specialists, to
ensure a wide range of readership and distribution.
Response: We will take these recommendations into consideration for
the future.
Comment: Some commenters supported and appreciated the
clarification that CMS will be submitting new measures for publication
in applicable specialty appropriate, peer-reviewed journals before
including such measures in the final list of measures annually.
Commenters requested that CMS ensure a more collaborative approach to
the submission of measures to peer-reviewed journals, possibly through
societies that routinely publish guidelines in their peer-reviewed
journals.
Response: We appreciate the support. We will continue to seek input
regarding our approach to the submission of measures from measure
owners and specialty societies to improve the annual new measure
submission process.
Comment: One commenter recommended that CMS collaborate with a
national, multi-stakeholder organization that can provide expertise on
measurement science, quality improvement, and expertise on data
submission mechanisms, such as clinical registries, to develop
alternative approaches to the peer review process. The commenter expressed
support for a process whereby new measures are subject to external
expert review and recommended that such review occur in an expedient
manner, and that results be made available and maintained as measures
are updated.
Response: Although we believe there is value in having external
expert review of new measures, we note that we are required by statute
to submit new measures to an applicable, specialty-appropriate peer-
reviewed journal.
Comment: One commenter stated that until the USPSTF recommendation
process is substantially reformed so that specialist physicians are
consulted as part of its recommendation process, CMS should proceed
with great caution before incorporating any future USPSTF
recommendations into MIPS quality measures.
Response: We are committed to engaging all stakeholders in our
measure development and selection process. We note that the annual call
for measures and the annual measure update provide for the
participation of patient, eligible clinician, and clinician
stakeholders, including specialists, and allows for a transparent and
robust review of our quality measure development and selection process.
Comment: One commenter recommended a quicker timeline for including
quality measures after they had been published in a peer-reviewed
journal; specifically, if a measure is already published in a peer-
reviewed
[[Page 77155]]
journal, the commenter recommended that the timeline for approval for
MIPS be 6-12 months.
Response: We appreciate the comments; however, new measures, even
if they have been previously published, can only be included in MIPS
through notice and comment rulemaking. Further, there is a statutory
requirement that we publish the new measures not later than November 1
prior to the first day of the applicable performance period for a given
year.
After consideration of the comments, we are finalizing our proposal
to use the Call for Quality Measures process as a forum to gather the
information necessary to draft the journal articles for submission from
measure developers, measure owners and measure stewards since we do not
always develop measures for the quality programs. Information from
measure developers, measure owners and measure stewards shall include
but is not limited to: Background, clinical evidence and data that
supports the intent of the measure; recommendation for the measure that
may come from a study or the United States Preventive Services Task
Force (USPSTF) recommendations; and how this measure would align with
the CMS Quality Strategy. The submission of this information will not
preclude us from conducting our own research using Medicare claims
data, Medicare survey results, and other data sources that we possess.
We will submit new measures for publication in applicable specialty-
appropriate, peer-reviewed journals before including such measures in
the final annual list of quality measures.
(5) Measures for Inclusion
Under section 1848(q)(2)(D)(v) of the Act, the final annual list of
quality measures must include, as applicable, measures under section
1848(k), (m), and (p)(2) of the Act, including quality measures
among: (1) Measures endorsed by a consensus-based entity; (2) measures
developed under section 1848(s) of the Act; and (3) measures submitted
in response to the ``Call for Quality Measures'' required under section
1848(q)(2)(D)(ii) of the Act. Any measure selected for inclusion that
is not endorsed by a consensus-based entity must have an evidence-based
focus. Further, under section 1848(q)(2)(D)(ix), the process under
section 1890A of the Act is considered optional.
Section 1848(s)(1) of the Act, as added by section 102 of the
MACRA, also requires the Secretary of Health and Human Services to
develop a draft plan for the development of quality measures by January
1, 2016. We solicited comments from the public on the ``Draft CMS
Measure Development Plan'' through March 1, 2016. The final CMS Measure
Development Plan was finalized and posted on the CMS Web site on May 2,
2016, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf.
(6) Exception for QCDR Measures
Section 1848(q)(2)(D)(vi) of the Act provides that quality measures
used by a QCDR under section 1848(m)(3)(E) of the Act are not required
to be established through notice-and-comment rulemaking or published
in the Federal Register; to be submitted for publication in applicable
specialty-appropriate, peer-reviewed journals; or to meet the criteria
described in section 1848(q)(2)(D)(v) of the Act. The Secretary must
publish the list of quality measures used by such QCDRs on the CMS Web
site. We proposed to post the quality measures for use by qualified
clinical data registries in the spring of 2017 for the initial
performance period and no later than January 1 for future performance
periods.
Quality measures that are owned or developed by the QCDR entity and
proposed by the QCDR for inclusion in MIPS but are not a part of the
MIPS quality measure set are considered non-MIPS measures. If a QCDR
wants to use a non-MIPS measure for reporting in the MIPS program,
we proposed that these measures go through a rigorous CMS
approval process during the QCDR self-nomination period. Specific
details on third party intermediaries' requirements can be found in
section II.E.9 of the proposed rule. The measure specifications will be
reviewed and each measure will be analyzed for its scientific rigor,
technical feasibility, duplication of current MIPS measures, clinical
performance gaps (as evidenced by background and literature review),
and relevance to specialty practice quality improvement. Once the
measures are analyzed, the QCDR will be notified of which measures are
approved for implementation. Each non-MIPS measure will be assigned a
unique ID that can only be used by the QCDR that proposed it. Although
non-MIPS measures are not required to be NQF-endorsed, we encourage the
use of NQF-endorsed measures and measures that have been in use prior
to implementation in MIPS. Lastly, we note that MIPS eligible
clinicians reporting via QCDR have the option of reporting MIPS
measures included in Table A in the Appendix in this final rule with
comment period to the extent that such measures are appropriate for the
specific QCDR and have been approved by CMS. We requested comment on
these proposals.
The following is a summary of the comments we received regarding
our proposals on QCDR measures.
Comment: One commenter supported CMS's proposed exception for QCDR
measures.
Response: We appreciate the support.
Comment: Some commenters agreed that non-MIPS measures implemented
in QCDRs should be analyzed for scientific rigor, technical
feasibility, duplication of current MIPS measures, clinical performance
gaps, as evidenced by background and literature review, and relevance
to specialty practice quality improvement.
Response: We appreciate the support.
Comment: One commenter stated that quality measures developed by
QCDRs should not be subject to an additional CMS verification process
before they are used for MIPS reporting and that an additional process
is problematic for specialty areas such as oncology where there are
deficiencies in the quality measure set for these types of practices.
The commenter further believed the additional verification and approval
processes appear to micro-manage the QCDR measure development process,
which could undermine the goals of QCDR reporting and create
additional burden, given that mature QCDRs such as the Quality Oncology
Practice Initiative have already undergone an extremely robust and
evidence-based process to ensure clinical validity and reliability.
The commenter further stated that additional uncertainty, restraints
and regulatory burden should not be placed on these QCDRs. The
commenter did support focusing on evaluating the QCDR measure
development methodology during the self-nomination process instead.
Response: While we do not wish to add burden to QCDRs, we do need
to maintain an appropriate standard for measures used in our program,
especially since MIPS payment adjustments are based on the quality
metrics.
Comment: One commenter recommended that CMS publish the specific
criteria that they plan to use in evaluating QCDR measures moving
forward. Some commenters requested that if CMS decides to deny the use
of a measure in a QCDR, that CMS provide the measure developer/steward/
owner with specific information on what criteria were not met that led
to a
[[Page 77156]]
measure not being accepted for use and provide a process for immediate
reconsideration when the issues have been addressed.
Response: Criteria were already adopted under PQRS and proposed
under MIPS (see 81 FR 28284) for non-MIPS measures. In the future, we
may publish supplemental guidance. In addition, measures should be
fully developed prior to submission, and we intend to provide necessary
feedback in a timely fashion.
Comment: A few commenters supported CMS's proposal for non-MIPS
measures in QCDRs to go through a rigorous CMS approval process during
the QCDR self-nomination period, and encouraged CMS to engage in a
multi-stakeholder process as part of this approval process. One
commenter recommended adopting an approval process for QCDR measures
that would require them to be endorsed by the NQF.
Response: We intend to take the multi-stakeholder process's views
into account when adopting policies on this topic in the future. We
retain the authority to adopt measures that have not been endorsed by
NQF, and we do not believe it appropriate to commit to requiring
endorsement.
Comment: One commenter did not agree that CMS should support new
measures developed by QCDRs.
Response: We respectfully disagree because we believe that QCDRs
offer MIPS eligible clinicians the opportunity to report on measures
associated with their beneficiaries that otherwise they may not be able
to report.
Comment: A few commenters recommended that CMS encourage QCDRs to
submit their measures for review by a consensus-based standards
organization, like the NQF. One commenter suggested that CMS publish
data for these measures to promote greater understanding of the use of
QCDR measures and performance trends.
Response: The QCDRs develop new measures and propose them for
consideration into our programs. We review all proposed measures and
consider them for inclusion based on policy principles described in our
Quality Measure Development Plan (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). Although we do not require NQF
endorsement for measure approval and acceptance, we expect all
submitted measures to have had a rigorous evaluation including an
assessment for feasibility, reliability, strong evidence basis, and
validity. All of our measures, regardless of NQF endorsement status,
are thoroughly reviewed, undergo rigorous analysis, are presented for
public comment, and have a strong scientific and clinical basis for
inclusion. QCDR measures must be approved by us before they can be made
available for use by MIPS eligible clinicians.
Comment: One commenter approved of the use of QCDRs but was
concerned that if QCDR measures are not part of the MIPS quality
measure set and must undergo a thorough approval process by CMS, this
will delay adoption of MIPS eligible measures and limit opportunities
for transparency and stakeholder input to ensure measures are evidence-
based and clinically rigorous. The commenter suggested that subjecting
these measures to a formal endorsement process, such as National
Quality Forum (NQF) endorsement, could help ensure that QCDR measures
enjoy broad, consensus-based support through a process of thorough
review and public vetting.
Response: We agree that ideally measures developed by QCDRs would
be submitted to NQF for endorsement. However, we will not require NQF-
endorsement and will continue to review measures submitted by QCDRs
prior to their implementation in the MIPS program. We believe that
QCDRs allow specialty societies and others to develop more relevant
measures for specialists that can be implemented more rapidly and
efficiently.
Comment: A few commenters expressed concern with CMS's
``stringent'' approach to QCDR measures as they believe it may be too
burdensome. Commenters stated that QCDR measures should continue to be
developed through multi-stakeholder processes by the relevant specialty
societies and reviewed by CMS in the QCDR approval process, but they
should not be required to undergo MAP and NQF processes that are too
time consuming to allow such developments to keep pace with constantly
changing CMS requirements.
Response: We would like to note that QCDR measures are not required
to undergo MAP and NQF processes.
Comment: One commenter supported flexibility with regard to the
measures that are available for reporting by physicians and also
supported the statutory provision that does not require QCDR-developed
measures to be NQF-endorsed.
Response: We appreciate the comment and support.
Comment: One commenter expressed concern with the need for CMS to
encourage reporting of NQF measures. The commenter noted that obtaining
NQF endorsement can be costly, time consuming and not the only way to
ensure that measures are sound. The commenter expressed concern that
the language will be interpreted as a requirement for NQF endorsement
and encouraged CMS to reconsider the language. Another commenter
opposed all measures being required to be endorsed by NQF for use in
QCDRs because: requiring QCDR measures to go through NQF would go
against CMS's goal of quickly iterating measures; the NQF process is
cost and resource prohibitive for smaller specialties; such a revision
would reduce the flexibility of QCDRs to offer specialty-specific
reporting measures, which provide broader options that may be more
meaningful to some practices than existing PQRS measures; and QCDRs
provide a better picture of the overall quality of care provided,
because QCDRs collect and report quality information on patients from
all payers, not just Medicare patients.
Response: We would like to note that NQF endorsement is not a
requirement for QCDR or MIPS measures. However, we do encourage
application for NQF endorsement because it provides a rigorous
scientific and consensus-based measure evaluation.
Comment: One commenter expressed support for the use of quality
measures that are used by QCDRs such as the Quality Oncology Practice
Initiative (QOPI), which is designated as a QCDR and focuses
specifically on measuring and assessing the quality of cancer care.
However, the commenter expressed concern over the process for approval
of QCDR measures, stating that CMS should not slow the continued use of
existing, robust QCDR measures; decrease adoption of innovative,
clinically relevant QCDR measures; or weaken the protections that
exempt quality measures developed for use in a QCDR from many of the
measure development processes required for other MIPS measures.
Response: We understand the commenter's concern and will continue to
review QCDR measures in a timely fashion. Further, we would like to
note that the approval criteria are not changing.
Comment: One commenter supported the CMS approach to non-MIPS
measures used by QCDRs, including the caution about ``check box''
measures. The commenter expressed concern that the measurement of cancer
care planning could become one such measure. Instead, the commenter
suggested that care planning measures be developed as
[[Page 77157]]
patient engagement/experience measures.
Response: We thank the commenter for the recommendation and will
take it under consideration for future years. We note that, consistent
with clinicians submitting quality data through other reporting
mechanisms, those submitting quality data through QCDRs must meet our
requirements for one outcome measure, or, if one is not applicable, one
high-priority measure.
Comment: A few commenters recommended that CMS allow QCDRs to
utilize measures from other QCDRs (with permission). One commenter
further stated that CMS proposed that QCDR non-MIPS measures must go
through a rigorous approval process and then be assigned a unique
identifier that can only be used by the QCDR that proposed the measure.
Commenters believe that prohibiting the sharing of non-MIPS quality
measures between QCDRs would inhibit the efficient and cost-effective
use and dissemination of such measures.
Response: We allow a QCDR to use a measure with permission from the
measure owner, which may be a QCDR in some instances. Further, if the
QCDR would like the measure to be shared among other clinicians, they
can submit the measure to be included in the Program, where it would
not be limited to that specific QCDR. Any measure needs only a single
submission for the measure approval process.
Comment: One commenter recommended that CMS not require or restrict
a QCDR from licensing its proprietary quality measures to other QCDRs
after the QCDR-developed measures become available for MIPS reporting.
Response: We do not restrict but in fact encourage the sharing of
QCDR-developed quality measures with clinicians and also other QCDRs.
Comment: One commenter requested that CMS clarify that the QCDR-
developed measures available for 2016 PQRS reporting would
automatically qualify for 2017 MIPS quality reporting.
Response: QCDR guidelines evolve over time as we continue to learn
from implementation. We expect that measures in a QCDR one year would
be retained for the next; however, we will review measures
each year to ensure they are still relevant and meet scientific
standards. Further, we would like to note that QCDRs that were
previously approved for PQRS will not be ``grandfathered'' as qualified
under MIPS. Rather, the QCDR must meet the requirements as described in
section II.E.9.a. of this final rule with comment period.
Comment: One commenter indicated that requiring data collection in
2017 for measures not already included in a QCDR presents a myriad of
technical challenges. QCDRs' development and modifications require
partnering with a number of developers that program code and develop
software updates to facilitate reporting. Software developers often
require 9-12 months to update data elements. In addition, time is
required to train practice staff on how to enter new data and integrate
measures into the practice workflow.
Response: We thank the commenter for the support of the QCDR
program and understand the concern of the time involved in doing this
work. We believe that QCDRs that implement and support non-MIPS
measures are aware of the measure specifications in enough time to
reliably work with developers to make system changes. Since these
measures are owned by the QCDR or their partners, we believe they
already know the changes needed prior to the submission of the measure
for inclusion in the program.
Comment: One commenter asked CMS to modify the QCDR self-nomination
process to allow measures that have been approved in prior years a
period of stability by automatic measure approval for a period of at
least 3 years, which would allow physicians and developers a period of
assured measure inclusion.
Response: The QCDR measures are reviewed annually to ensure they
are still appropriate for use in the program. We thank the commenter
for the recommendation and will consider for future years.
Comment: One commenter suggested that CMS streamline the process
for measure inclusion into MIPS beyond the accommodations that have
been made for QCDRs and recommended that CMS consider the development
of an ``open source'' QCDR that would allow small specialty
organizations the opportunity to take advantage of the benefits of
QCDRs for measure development, thereby shortening the process for
inclusion in MIPS.
Response: It is not our intent to expand QCDR types at this time,
but we will take this suggestion into consideration for future
rulemaking.
Comment: One commenter supported the inclusion of outcome measures
and other high priority measures for QCDRs, as well as the optional
reporting of cross cutting measures by those clinicians who find those
measures relevant to their practice. However, the commenter did not
support mandating cross cutting measures requirements, especially for
QCDRs since it contradicts the intent of this submission mechanism,
which is to give clinicians broad flexibility over determining which
measures are most meaningful for their specialized practice.
Response: CMS believes that there are basic standards that each
physician, regardless of specialty, can and should meet.
Additionally, the MIPS program offers payment incentives and MIPS
payment adjustments based on the value of care patients receive. Having
a cross-cutting set of measures will allow for direct comparisons among
participants. We would like to note, however, that as discussed in
section II.E.5.b. of this final rule with comment period, we are not
finalizing the cross-cutting measure requirement.
Comment: One commenter requested that CMS compile the list of
entities qualified to submit data as a QCDR, and that CMS accept the
Indian Health Service (IHS) Resource and Patient Management System
(RPMS) and other Tribal health information systems as a QCDR and work
with IHS and Tribes to ensure health information systems are capable of
meeting MIPS reporting requirements.
Response: CMS posts a list of approved QCDRs on its Web site
annually. Entities are required to self-nominate to participate in MIPS
as a QCDR. Entities that meet the definition of a ``QCDR'' at Sec.
414.1305 and meet the participation requirements outlined in section
II.E.9 of this final rule with comment period will be approved as a
QCDR.
Comment: One commenter requested that CMS consider employing a MAV
process for QCDRs or at minimum clarifying its intent for using such a
process. The commenter stated that even in QCDRs certain clinicians do
not have enough measures to report.
Response: QCDRs are required to go through a rigorous approval
process that requires both their MIPS and non-MIPS measures be
submitted at the time of self-nomination. Since QCDRs may have up to
30 non-MIPS measures approved for availability to MIPS eligible
clinicians, we anticipate that very few MIPS eligible
clinicians who utilize the QCDR mechanism would not have measures
applicable to them.
Comment: One commenter recommended that CMS not score non-MIPS QCDR
measures in their first year, as the commenter did not believe they
would have good benchmarking data.
Response: The non-MIPS measures approved for use within QCDRs are
required to have benchmarks when possible and appropriate.
[[Page 77158]]
Comment: One commenter requested that CMS consider allowing QCDRs
to determine the appropriate reporting sample (number or percentage) on
a measure by measure basis.
Response: We will consider this recommendation in future rulemaking
as we review the impact of such a change. However, we believe that the
reporting sample must be of sufficient size to meet our reliability
standards.
Comment: One commenter supported the proposed rule's establishment
of a quality measure review process for those measures that are not
NQF-endorsed or included on the final MIPS measure list, to assess
whether the quality measures have an evidence-based focus and are
reliable and valid.
Response: We appreciate the comment and support.
Comment: One commenter did not support CMS's proposal to support
new measures developed by QCDRs because the commenter believed quality
measures should go through a rigid evaluation and review process. The
commenter believed CMS should focus on streamlining quality reporting
by gradually eliminating excessive measures.
Response: We would like to note that all QCDR measures undergo a
rigorous review process before receiving approval.
Comment: One commenter indicated that allowing for the inclusion of
non-MIPS quality measures via QCDRs will introduce more inconsistency
and burden and result in data that cannot be compared across states/
regions/providers, depending on their QCDR of origin.
Response: Acceptance of non-MIPS QCDR measures is intended to support
specialty groups' ability to report on measures most relevant to their
practice. QCDRs operate on a large scale, many at a national level, and
offer valid and reliable measure data.
After consideration of the comments, we are finalizing at Sec.
414.1330(a)(2) our proposal that for purposes of assessing performance
of MIPS eligible clinicians on the quality performance category, CMS
will use quality measures used by QCDRs. In the circumstances where a
QCDR wants to use a non-MIPS measure for inclusion in the MIPS program
for reporting, those measures will go through a CMS approval process
during the QCDR self-nomination period. We also are finalizing our
proposal to post the quality measures for use by qualified clinical
data registries in the spring of 2017 for the initial performance
period and no later than January 1 for future performance periods.
(7) Exception for Existing Quality Measures
Section 1848(q)(2)(D)(vii)(II) of the Act provides that any quality
measure specified by the Secretary under section 1848(k) or (m) of the
Act and any measure of quality of care established under section
1848(p)(2) of the Act for a performance or reporting period beginning
before the first MIPS performance period (herein referred to
collectively as ``existing quality measures'') must be included in the
annual list of MIPS quality measures unless removed by the Secretary.
As discussed in section II.E.4 of the proposed rule, we proposed that
the performance period for the 2019 MIPS adjustment would be CY 2017,
that is, January 1, 2017 through December 31, 2017. Therefore, existing
quality measures would consist of those that have been specified or
established by the Secretary as part of the PQRS measure set or VM
measure set for a performance or reporting period beginning before CY
2017.
Section 1848(q)(2)(D)(vii)(I) of the Act provides that existing
quality measures are not required to be established through notice-and-
comment rulemaking or published in the Federal Register (although they
remain subject to the applicable requirements for removing measures and
including measures that have undergone substantive changes), nor are
existing quality measures required to be submitted for publication in
applicable specialty-appropriate, peer-reviewed journals.
The following is a summary of the comments we received regarding
our proposal on the Exception for Existing Quality Measures.
Comment: Some commenters expressed preference for leveraging
existing quality measures to ensure consistency of measurement.
Response: The vast majority of measures that we are finalizing
for the MIPS quality performance category are existing PQRS measures.
Comment: One commenter suggested that CMS conduct robust assessment
of previously developed quality measures to ensure that the measures
improve patient care and outcomes before introducing or maintaining
those measures in the MIPS Program.
Response: We routinely review all of our existing measures through
a maintenance and evaluation process that assesses the clinical
impact on quality and any unintended consequences. We are committed to
utilizing measures that improve patient care and outcomes.
After consideration of comments received from stakeholders on our
proposals for exceptions to existing quality measures, we are
finalizing our policies as proposed. While CMS has modified its
performance period proposal as discussed in section II.E.4 of this
final rule with comment period, this policy would not be affected since
the minimum 90-day performance period would not begin any earlier than
January 1, 2017.
(8) Consultation With Relevant Eligible Clinician Organizations and
Other Relevant Stakeholders
Section 1890A of the Act, as added by section 3014(b) of the
Affordable Care Act, requires that the Secretary establish a pre-
rulemaking process under which certain steps occur for the selection of
certain categories of quality and efficiency measures, one of which is
that the entity with a contract with the Secretary under section
1890(a) of the Act (that is, the NQF) convenes multi-stakeholder groups
to provide input to the Secretary on the selection of such measures.
These categories are described in section 1890(b)(7)(B) of the Act and
include the quality measures selected for the PQRS. In accordance with
section 1890A(a)(1) of the Act, the NQF convened multi-stakeholder
groups by creating the MAP. Section 1890A(a)(2) of the Act requires
that the Secretary make publicly available by December 1 of each year a
list of the quality and efficiency measures that the Secretary is
considering under Medicare. The NQF must provide the Secretary with the
MAP's input on the selection of measures by February 1 of each year.
The lists of measures under consideration for selection are available
at http://www.qualityforum.org/map/.
Section 1848(q)(2)(D)(viii) of the Act provides that relevant
eligible clinician organizations and other relevant stakeholders,
including state and national medical societies, must be consulted in
carrying out the annual list of quality measures available for MIPS
assessment. Section 1848(q)(2)(D)(ii)(II) of the Act defines an
eligible clinician organization as a professional organization as
defined by nationally recognized specialty boards of certification or
equivalent certification boards. Section 1848(q)(2)(D)(viii) of the Act
further provides that the pre-rulemaking process under section 1890A of
the Act is not required to apply to the selection of MIPS quality
measures.
Although MIPS quality measures are not required to go through the
pre-rulemaking process under section 1890A of the Act, we have found
the
[[Page 77159]]
MAP's input valuable. The MAP process enables us to consult with
relevant EP organizations and other stakeholders, including state and
national medical societies, patient and consumer groups and purchasers,
in finalizing the annual list of quality measures. In addition to the
MAP's input this year, we also received input from the Core Quality
Measure Collaborative on core quality measure sets. The Core Quality
Measure Collaborative was organized by AHIP in coordination with CMS in
2014. This multi-stakeholder workgroup has developed seven condition or
setting-specific core measure sets to help align reporting requirements
for private and public health insurance providers. Sixteen of the newly
proposed measures under MIPS were recommended by the Core Quality
Measure Collaborative and many of the remaining measures in the core
sets were already in the PQRS program and have been proposed for MIPS
for CY 2017.
The following is a summary of the comments we received regarding
consultation with relevant eligible clinician organizations and other
relevant stakeholders.
Comment: A few commenters applauded the work that went into
establishing the measures that went into MIPS. The commenters
suggested CMS continue to work with all stakeholders to align quality
measures with those used in the private sector.
Response: We intend to continue to work with stakeholders to
further align the MIPS quality measures with those used in the private
sector.
Comment: Several commenters encouraged CMS to engage as broad an
array of stakeholder organizations as possible in the measure review
and selection process, noting that physicians and healthcare facility
stakeholders, relevant task forces, provider groups, including nurses,
physician assistants, nurse practitioners, patients, and caregivers
should be included. Further, the commenters requested CMS implement new
opportunities for stakeholders to participate in the measure
development process.
Response: Part of the process for measure adoption is the public
comment period, and we use the public comment period to enable all
relevant stakeholders of all types, including the various stakeholders
listed above, to provide feedback on measures that we have proposed for
the Program.
Comment: One commenter encouraged CMS to keep measure developers,
clinicians, and stakeholders engaged in the quality measure development
and selection process to ensure the implementation of clinically
meaningful measures that are aligned across the MACRA Quality Payment
Program performance pathways and other payer programs.
Response: We will continue to keep measure developers, clinicians,
and stakeholders engaged in the quality measure development and
selection process as evidenced by the multiple opportunities to provide
input to the measure development and selection process.
Comment: A few commenters stated that CMS should work broadly with
stakeholders, including patients and patient advocacy organizations to
identify and address measure gaps. Further, these stakeholders could
provide insight on patient experience and satisfaction measures, as
well as measures of care planning and coordination. Increasingly,
patient advocacy organizations are working to develop such measures
based on their own registry data. Commenters encouraged CMS to commit
to acting as a resource for those stakeholders that have less
experience with the measures submission process, to encourage their
participation in the process. The commenter also encouraged CMS to identify
disease states for which commenters have articulated gaps in quality
measures, and determine the feasibility of adopting measures based upon
consensus-based clinical guidelines upon which CMS could solicit
comments.
Response: We appreciate the recommendations and will engage with
all stakeholders, including patient and consumer organizations. We
provide a wide array of support and information about our measure
development process. Our Measure Development Plan provides clear
guidance for stakeholders on this process (available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). We
will take these suggestions into consideration in the future.
Comment: One commenter suggested that CMS look to and work with the
International Consortium for Health Outcomes Measurement (ICHOM) to
develop additional needed outcome measures, and referenced MedPAC's
June 2014 report.
Response: We will continue to collaborate with stakeholders that
develop outcome measures for quality reporting.
Comment: One commenter recommended that CMS collaborate with
specialty societies, frontline clinicians, and EHR vendors in the
development, testing, and implementation of measures with a focus on
integrating the measurement of and reporting on performance with
quality improvement and care delivery and on decreasing clinician
burden.
Response: We agree it is important to continuously enhance the
integration of health IT support for quality measurement and
improvement with safe, effective care delivery workflows that minimize
burdens on the clinician, patient, and clinical relationship. We will
take the commenter's recommendation into consideration as we develop,
test, and implement new measures.
Comment: One commenter recommended that CMS carefully review
measure sets and defer to medical professional specialty society
comments to ensure that measure sets are appropriately constructed. The
commenter recommended that CMS obtain insight from clinicians who will
be reporting these services to test the validity of the measure sets.
Response: We will continue to work with specialty groups to improve
the specialty measure sets in the future.
Comment: Several commenters recommended that CMS use the core
measure sets developed by the Core Quality Measures Collaborative
because using these measure sets would ensure alignment, harmonization,
and the avoidance of competing quality measures among payers.
Response: Measures that are a part of the CQMC core measure sets
have been proposed for implementation and CMS intends to continue its
collaboration with the CQMC to ensure alignment and harmonization in
quality measure reporting.
Comment: One commenter recommended that CMS consider the
recommendations made by the American College of Physicians (ACP)
Performance Measurement Committee with regard to measure selection
within MIPS.
Response: The ACP, like all other professional societies, has the
opportunity to comment and provide feedback on our measure selection,
including their recommendations, through the notice and comment
process.
Comment: One commenter stated CMS has not adequately involved
physicians in the measure development process.
Response: All technical expert panels (TEPs) for measures developed
by CMS or a CMS contractor include a clinical expert. Additionally, the
majority of measures in the program are not developed by CMS but by
medical specialty societies.
[[Page 77160]]
Comment: One commenter suggested that CMS account for the
professional role of the Advanced Practice Registered Nurse (APRN) and
all appropriate stakeholders who provide clinical services to
beneficiaries when creating and evaluating quality measures. The
commenter suggested that CMS ensure the committees and Technical Expert
Panels tasked with developing quality measures include nurses.
Response: We value the expertise of APRNs in providing patient care
and we will consider their participation in the future.
Comment: One commenter believed CMS should continue to work with
stakeholders to make the process for selection of quality measures
clear and well defined. The commenter encouraged CMS to focus on
getting new, relevant measures into the program within a shorter
timeframe. The commenter believed that a 2-year submission to
implementation interval would hinder introduction of new measures into
MIPS through the traditional approach. The commenter believed there
will be growth in measures submitted to the program through QCDRs in
the future.
Response: We do not develop most of the measures, but rather
measure stewards/owners submit their measures to CMS for consideration
and implementation. We will work with measure developers and other
stakeholders to continue to try and shorten the timeframe for measure
development and implementation and to make the process as efficient as
possible.
Comment: One commenter requested that CMS promote and disseminate
research on which process improvement measures have proven to be the
most effective at improving clinical outcomes.
Response: We will take this under consideration and will continue
working with clinicians to promote best practices and the highest
quality healthcare for clinicians and Medicare beneficiaries.
Comment: One commenter believed we should consider how to work with
measure developers to integrate patient preferences into measure
design.
Response: We agree with the commenter and believe the patient
experience and incorporation of patient preferences are important
components of healthcare quality.
Comment: Commenters recommended that CMS consult with relevant
eligible clinician organizations and other relevant stakeholders and
reminded CMS that the MACRA statute does not require CMS to utilize the
NQF MAP to provide guidance into the pre-rulemaking process on the
selection of MIPS quality measures, but requires the Secretary to
consult with relevant eligible clinician organizations, including state
and national medical societies. To strengthen the pre-rulemaking
process, commenters recommended that CMS address issues with the MAP
around: voting options on individual measures; discussion and treatment
of existing measures undergoing maintenance review; timelines for
commenting on MAP recommendations; the make-up of the MAP coordinating
committee and workgroups; and the sometimes inadequate notice for
public comment (for example, agendas are often not available until
close to the day of a MAP meeting). In addition, the commenters
reminded CMS that requiring measure developers to propose measures to
the MAP for use in CMS programs introduces another time-consuming step
in the measure development cycle, and that MACRA provides CMS the
flexibility in terms of how it uses the MAP.
Response: We appreciate their feedback about the MAP, and the
commenters correctly note that we retain the authority to adopt
measures without MAP's recommendations. We will continue to work with
the NQF on optimizing the MAP process and will take the commenters'
recommendations into consideration in future rulemaking.
(9) Cross-Cutting Measures for 2017 and Beyond
Under PQRS we realized the value in requiring EPs to report a
cross-cutting measure and have proposed to continue the use of cross-
cutting measures under MIPS. The cross-cutting measures help focus our
efforts on population health improvement and they also allow for
meaningful comparisons between MIPS eligible clinicians. Under MIPS, we
proposed fewer cross-cutting measures than those available under PQRS
for 2016 reporting; however, we believe the list contains measures for
which all patient-facing MIPS eligible clinicians should be able to
report, as the measures proposed include commonplace health improvement
activities such as checking blood pressure and medication management.
We proposed to eliminate some measures for which the reporting MIPS
eligible clinician may not actually be providing the care, but is
merely reporting another MIPS eligible clinician's performance result. An
example of this would be a MIPS eligible clinician who never manages a
diabetic patient's glucose, yet previously could have reported a
measure about hemoglobin A1c based on an encounter. This type of
reporting will likely not help improve or confirm the quality of care
the MIPS eligible clinician provides to his or her patients. Although
there are fewer proposed cross-cutting measures under MIPS, in previous
years some measures were too specialized and could not be reported on
by all MIPS eligible clinicians. The proposed cross-cutting measures
under MIPS are more broadly applicable and can be reported on by most
specialties. Non-patient facing MIPS eligible clinicians do not have a
cross-cutting measure requirement. The cross-cutting measures that were
available under PQRS for 2016 reporting that are not being proposed as
cross-cutting measures for 2017 reporting are:
PQRS #001 (Diabetes: Hemoglobin A1c Poor Control).
PQRS #046 (Medication Reconciliation Post Discharge).
PQRS #110 (Preventive Care and Screening: Influenza
Immunization).
PQRS #111 (Pneumonia Vaccination Status for Older Adults).
PQRS #112 (Breast Cancer Screening).
PQRS #131 (Pain Assessment and Follow-Up).
PQRS #134 (Preventive Care and Screening: Screening for
Clinical Depression and Follow-Up Plan).
PQRS #154 (Falls: Risk Assessment).
PQRS #155 (Falls: Plan of Care).
PQRS #182 (Functional Outcome Assessment).
PQRS #240 (Childhood Immunization Status).
PQRS #318 (Falls: Screening for Fall Risk).
PQRS #400 (One-Time Screening for Hepatitis C Virus (HCV)
for Patients at Risk).
While we proposed to remove the above listed measures from the
cross-cutting measure set, these measures were proposed to remain
available as individual quality measures for MIPS reporting, some with
proposed substantive changes.
The following is a summary of the comments we received regarding
our proposal on cross-cutting measures for 2017 and beyond.
Comment: Some commenters supported the proposal to require
reporting at least one cross-cutting measure, and suggested that CMS
support the development of additional cross-cutting measures.
Response: We appreciate the support; however, as discussed in
section II.E.5.b. of this final rule with comment period, we are not
finalizing the cross-cutting measure requirement in an effort
[[Page 77161]]
to reduce program complexity as part of the transition year of CY 2017.
Comment: Several commenters requested that CMS provide a broader
selection of cross-cutting measures to choose from, stating that the
list is not robust enough to allow all clinicians to meet this
requirement.
Response: We appreciate the suggestion; however, as discussed in
section II.E.5.b. of this final rule with comment period, we are not
finalizing the cross-cutting measure requirement as part of the
transition year of CY 2017.
Comment: One commenter requested that all eligible clinicians
receive clear and timely notification of all cross-cutting and outcome
measures before the start of the reporting period so that they can
select and plan for a full year of quality improvement activities.
Response: We appreciate the recommendation; however, as discussed
in section II.E.5.b. of this final rule with comment period, we are not
finalizing the cross-cutting measure requirement as part of the
transition year of CY 2017.
Comment: Numerous commenters did not agree with requiring all
patient-facing clinicians to report one cross-cutting measure. The
commenters did not believe the measures were important or informative
for some procedural or technical subspecialties and stated that they
are difficult to understand and implement. Further, one commenter
believed that the cross-cutting measures appear to be measures
applicable to multiple clinician types rather than cross-sectional
measures, or anything that would push for community collaboration.
Response: We appreciate the feedback and would like to note that,
as discussed in section II.E.5.b. of this final rule with comment
period, we are not finalizing the cross-cutting measure requirement as
part of the transition year of CY 2017.
Comment: One commenter stated that non-patient facing clinicians
should be exempt from reporting a cross-cutting measure.
Response: We would like to note that non-patient facing clinicians
would have been exempt from reporting a cross-cutting measure. Further,
as discussed in section II.E.5.b of this final rule with comment
period, we are not finalizing the cross-cutting measure requirement as
part of the transition year of CY 2017.
Comment: A few commenters recommended that CMS work with
stakeholders to develop cross-cutting measures for non-patient facing
MIPS eligible clinicians, as these MIPS eligible clinicians play an
important role in ensuring safe, appropriate, high-quality care. The
commenters supported allowing non-patient facing MIPS eligible
clinicians to report through a QCDR that can report non-MIPS measures.
Response: We appreciate the recommendation; however, as discussed
in section II.E.5.b. of this final rule with comment period, we are not
finalizing the cross-cutting measure requirement as part of the
transition year of CY 2017.
Comment: A few commenters objected to the requirement that
clinicians report one cross-cutting measure chosen from a list of
general quality measures because it is counter to the statute's intent
to allow eligible clinicians who report via QCDR the flexibility to
select measures that are most relevant to their practice. The commenters
urged CMS to remove the requirement that physicians reporting the
quality performance category via QCDR must report on one cross-cutting
measure.
Response: We appreciate the commenters' feedback; however, as
discussed in section II.E.5.b. of this final rule with comment period,
we are not finalizing the cross-cutting measure requirement as part of
the transition year of CY 2017.
Comment: Several commenters disagreed with our proposal to remove
various measures from the cross-cutting measure set. We also received
support for some of the measures we proposed to include, as well as
comments on measures that commenters did not support. Additionally, we
received several recommendations of additional quality measures for
potential inclusion in the cross-cutting measure set.
Response: We appreciate the commenters' feedback and would like to
note that we are not finalizing the cross-cutting measure requirement
as part of the transition year of CY 2017. We would also like to note
that the measures that were proposed for the cross-cutting measure set
are still listed as available measures under Table A of the appendix in
this final rule with comment period.
As a result of the comments, and based on our other finalized
policies, we are not finalizing the set of cross-cutting measures as
proposed, in order to reduce the complexity of the program. Rather, we are
incorporating these measures within the MIPS individual (Table A) and
specialty measure sets (Table E) within the appendix of this final rule
with comment period. We continue to value the reporting of cross-
cutting measures to incentivize improvements in population health and
in order to be better able to compare large numbers of physicians on
core quality measures that are important to patients and the health of
populations. We understand that many clinicians believe that cross-
cutting measures may not apply to them. We are seeking additional
comments in this final rule with comment period from the public for
future notice-and-comment rulemaking on approaches to implementation of
cross-cutting measures in future years of the MIPS program that could
achieve these program goals and be meaningful to MIPS eligible
clinicians and the patients they serve.
d. Miscellaneous Comments
We received a number of comments for this section that are not
related to specific measure proposals as well as comments spanning
multiple measure proposals that contained common themes. We have
summarized those comments below.
Comment: Numerous commenters made requests for new measures to be
included in the annual list of quality measures. For example, we
received several comments requesting additional measures be added that
pertain to palliative care and behavioral health.
Response: We appreciate the commenters' suggestions. We would
encourage the commenters to submit potential new measures for inclusion
in MIPS through the Call for Quality Measures process.
Comment: Numerous commenters made requests for changes to existing
measure specifications. For example, some commenters requested
encounter codes be added or removed from measure specifications or
certain denominator criteria be expanded to include additional target
groups for various measures.
Response: Although CMS has authority over all of its quality
programs and measure changes within those programs, we also work with
measure owners regarding the updates to measures. Measure changes are
not automatically implemented within quality programs. We may adopt
changes to measures in two ways: (1) For measures with substantive
changes, the changes must be adopted through notice-and-comment
rulemaking. Generally, measures with substantive changes are proposed
through rulemaking and open for comment. (2) For measures with non-
substantive or technical changes, we can consider implementing the
changes through subregulatory means.
Comment: Numerous commenters made requests for additional specialty
measure sets, as well as modifications to the proposed specialty
measure sets.
[[Page 77162]]
Response: We appreciate the commenters' suggestions. We plan to
work with measure developers and specialty societies to continuously
improve and expand the specialty measure sets in the future. Further,
several comments either were not specific enough about which measures
would be appropriate for a specialty measure set, or concerned
specialties for which the current measure set does not contain enough
measures to support a specialty-specific set.
In instances where we received comments that were specific enough to
develop or modify the specialty measure sets, and which we believed
were appropriate, we have included those updates along with the
rationale for those changes in the measure tables in the appendix.
Comment: We received several requests to update measure steward
information in the measure tables located in the appendix.
Response: We appreciate the commenters' feedback and have made the
necessary updates to the measure steward information in the measure
tables.
Comment: Some commenters asked that physician-led specialty
organizations be able to develop evidence-based quality guidelines of
their own and proceed with a simple attestation procedure to document
compliance.
Response: As discussed in section II.E.5.c. of this final rule with
comment period, we have an annual call for measures where clinicians
have the opportunity to submit additional measures covering the
services that they provide. We have also made available a measure
development plan for stakeholders' review, available at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf.
While we recognize the appeal of simple attestation, we believe it
is important to receive actual performance information on how a MIPS
eligible clinician or group performed on a measure, not just whether
they reported it.
Comment: A few commenters requested the adoption of appropriate use
criteria (AUC) as quality measures to ensure the best care for
patients. The commenters recommended that the specialty areas covered
by the AUCs include: Radiology, cardiology, musculoskeletal (includes
specialized therapy management, interventional pain, large joint
surgery, spine surgery), radiation therapy, genetics and lab
management, medical oncology, sleep medicine, specialty drug, and post-
acute care. In addition, the commenters recommended that AUC be derived
from leading specialty societies, be incorporated from current peer-
reviewed medical literature, have input from subject matter expert
clinicians and community-based physicians, be available to any eligible
clinicians free of charge on a Web site, and have a proven track record
of effectiveness in a wide range of practice settings. The AUC should
be subject to oversight and review by nationally recognized,
independent accrediting bodies, and be reviewed annually.
Response: We are finalizing quality measures that are based on the
AUC in this rule.
Comment: One commenter promoted the value of palliative care and
encouraged CMS to monitor the effects of MACRA, specifically the
quality and cost performance categories, on patient access to health
care providers, particularly palliative care providers.
Response: We appreciate the suggestion. We intend to monitor the
effects of the MIPS program on all aspects of care.
We have considered the comments received and will take them into
consideration in future notice-and-comment rulemaking.
e. Cost Performance Category
(1) Background
(a) General Overview and Strategy
Measuring cost is an integral part of measuring value. We envision
the measures in the MIPS cost performance category would provide MIPS
eligible clinicians with the information they need to provide
appropriate care to their patients and enhance health outcomes. In
implementing the cost performance category, we proposed to start with
existing condition and episode-based measures, and the total per capita
costs for all attributed beneficiaries measure (total per capita cost
measure). We also proposed that all cost measures would be adjusted for
geographic payment rate adjustments and beneficiary risk factors. In
addition, a specialty adjustment would be applied to the total per
capita cost measure. We proposed that all of the measures attributed to
a MIPS eligible clinician or group would be weighted equally within the
cost performance category, and there would be no minimum number of
measures required to receive a score under the cost performance
category. Lastly, we indicated that we plan to draw on standards for
measure reliability, patient attribution, risk adjustment, and payment
standardization from the VM as well as the Physician Feedback Program,
as we believe many of the same measurement principles for cost
measurement in the VM are applicable for measurement in the cost
performance category in MIPS (81 FR 28196).
We proposed that all measures used under the cost performance
category would be derived from Medicare administrative claims data and
as a result, participation would not require use of a data submission
mechanism.
In response to public comments, as detailed in section II.E.5.e.(2)
of this final rule with comment period, we are lowering the weight of
the cost performance category in the MIPS final score from 10 percent
in the proposed rule to 0 percent for the transition year (MIPS payment
year 2019). We are finalizing a weight of 10 percent for MIPS payment
year 2020. For MIPS payment year 2021 and beyond, the cost performance
category will have a weight of 30 percent of the final score as
required by section 1848(q)(5)(E)(i) of the Act. Reducing the weight of
the cost performance category gives MIPS eligible clinicians and
groups the opportunity to better understand the cost measures in MIPS,
especially the impact of adjustments to the attribution methodologies
and of the MIPS decile scoring system on their performance, without an
effect on their payments. We are also limiting the cost
measures finalized for the CY 2017 performance period to those that
have been included in the VM or the 2014 sQRUR and that are reliable
for both individual and group reporting. We plan to continue developing
care episode groups, patient condition groups, and patient relationship
categories (and codes for such groups and categories). We plan to
incorporate new measures as they become available and will give the
public the opportunity to comment on these provisions through future
notice and comment rulemaking.
The following is a summary of the comments we received on the
general provisions of cost measurement within the MIPS program.
Comment: Several commenters supported the inclusion of cost
measures as part of the MIPS program, noting the important role of
clinicians in ordering services and managing care so as to avoid
unnecessary services.
Response: We thank the commenters for their support and believe
that cost is an important element of the MIPS program, reflecting the
key role of clinicians in guiding care decisions. However, we also
consider it important to phase in cost measurement. Therefore, we are
limiting the number of cost measures for the CY 2017
[[Page 77163]]
performance period and lowering the weight of the cost performance
category to 0 percent in the final score for the transition year, 10
percent in the second MIPS payment year, and 30 percent in the third
and following MIPS payment years.
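The finalized phase-in of the cost performance category weight can be expressed as a simple lookup by MIPS payment year. This is an illustrative sketch of the percentages stated above, not an official calculation; the function name is hypothetical.

```python
def cost_category_weight(mips_payment_year):
    """Cost performance category weight in the MIPS final score, by payment year."""
    if mips_payment_year == 2019:   # transition year (CY 2017 performance period)
        return 0.0
    if mips_payment_year == 2020:   # second MIPS payment year
        return 0.1
    if mips_payment_year >= 2021:   # 30 percent per section 1848(q)(5)(E)(i) of the Act
        return 0.3
    raise ValueError("MIPS payment adjustments begin with payment year 2019")
```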
Comment: Several commenters noted concern with the inclusion of
cost measures in MIPS because it could cause unethical behavior and
improper reductions in care, and clinicians control only a small part
of healthcare costs. Some commenters noted that clinicians do not
determine the costs of services such as hospital visits, durable
medical equipment, or prescription drugs. Others asked that cost
measures be used only when there is a direct tie to quality
measurement.
Response: We agree that cost should be considered in the context of
quality. The statutory design of the final score incorporates both
quality and cost such that they are linked in the clinician's overall
assessment in MIPS. We recognize that clinicians do not personally
provide, order, or determine the price of all of the individual
services in the cost measures, but we believe that clinicians do
have an effect on the volume and type of services that are provided to
a patient through better coordination of care and improved outcomes. We
plan to continue to assess best methods for attributing cost to MIPS
eligible clinicians.
Comment: Many commenters supported cost measures being calculated
using claims data so as not to add additional reporting burden. Some
commenters expressed concern with cost measures solely calculated based
on claims and suggested that CMS consider other measures, such as
appropriate use criteria or elements of Choosing Wisely.
Response: We agree that claims data can provide valuable
information on cost and this method has the advantage of not requiring
additional reporting from MIPS eligible clinicians. We appreciate that
there are some potential measures related to cost that would not
necessarily be calculated using claims. Some of these measures, such as
appropriate use measures, are included, as appropriate, in the quality
and improvement activity performance categories. We will take into
consideration the commenter's suggestion related to elements of the
Choosing Wisely measures in the future and determine whether they may
be considered as cost measures.
Comment: Several commenters expressed concern that the proposed
measures for the cost performance category did not adequately adjust
costs to account for the risks associated with different types of
patients. They commented that the measures do not adjust for
socioeconomic status, patient compliance, or other non-health factors
that might contribute to spending. Many of these commenters encouraged
socioeconomic status to be included as a risk adjustment variable for
individual measures or the entire program.
Response: We note that we are establishing, in this final rule with
comment period, the cost performance category weight as 0 percent of
the final score for the transition year (MIPS payment year 2019) to
allow MIPS eligible clinicians to gain experience with these measures
in MIPS. Although we believe the measures are valid and reliable, we
will continue to evaluate the potential impact of risk factors,
including socioeconomic status, on cost measure performance. Please see
section II.E.5.b.(3) for a discussion of the integration of the
findings of the ASPE report on socioeconomic factors into the overall
MIPS program in the future.
Comment: Several commenters expressed concern that the risk
adjustment methods used in the cost performance category would not
adequately address the issues of their particular specialty or field of
medicine. Many recommended that they only be compared to clinicians who
had the same specialty.
Response: We will continue to explore methods to refine our risk
adjustment methods to accommodate the different types of patients
treated by clinicians in the Medicare system. We are applying a
specialty adjustment to the total per capita cost measure because we
found, when implementing this measure as part of the VM, that there
were widely divergent costs among patients treated by various
specialties that were not addressed by other risk adjustment methods.
The other measures we are including in the cost performance category
for the CY 2017 performance period accommodate clinical differences in
other ways. The Medicare Spending per Beneficiary (MSPB) measure is
adjusted on the basis of the index admission diagnosis-related group
(DRG), which is likely to differ
based on the specialty of the clinician attributed to the measure. The
episode-based measures are triggered on the basis of the provision of a
service that identifies a type of patient who is often seen by a
certain specialty or limited number of specialties and this concurrent
risk adjustment is an effective predictor of episode cost. We believe
that the adjustments contained in these measures adequately
differentiate patient populations by different specialties and we will
continue to investigate methods to ensure that the unique attributes of
various medical specialties are appropriately accounted for within the
program.
Comment: Some commenters expressed concern that cost measures would
discourage the development of new therapies. One commenter suggested
that CMS not include the costs of new technology within cost measures.
Response: We wish to ensure that cost measurement does not hinder
the appropriate uptake of new technologies. One challenge of new
technologies is that the costs are not represented in the historical
benchmarks. However, we are finalizing a policy to create benchmarks
for the cost measures based on the performance period, so the
benchmarks will build in the costs associated with adoption of new
technologies in that period. We also anticipate that new technologies
may reduce the need for other services, which could further reduce the
cost of care. We believe that excluding new technology from the cost
measures is not appropriate when the technology is being paid for by
the Medicare program and its beneficiaries, but we will continue to
monitor this issue to determine whether adjustments should be made in
the future.
(b) MACRA Requirements
Section 1848(q)(2)(A)(ii) of the Act establishes cost as a
performance category under the MIPS. Section 1848(q)(2)(B)(ii) of the
Act describes the measures of the cost performance category as the
measurement of resource use for a MIPS performance period under section
1848(p)(3) of the Act, using the methodology under section 1848(r) of
the Act as appropriate, and, as feasible and applicable, accounting for
the cost of drugs under Part D.
As discussed in section II.E.5.e.(1)(c) of the proposed rule, we
previously established in rulemaking the VM, as required by section
1848(p) of the Act, that provides for differential payment to a
physician or a group of physicians (and EPs as the Secretary determines
appropriate) under the PFS based on the quality of care furnished
compared to cost. For the evaluation of costs of care, section
1848(p)(3) of the Act refers to appropriate measures of costs
established by the Secretary that eliminate the effect of geographic
adjustments in payment rates and take into account risk factors (such
as socioeconomic and demographic characteristics, ethnicity, and health
status of individuals, such as to recognize that less healthy
individuals
[[Page 77164]]
may require more intensive interventions) and other factors determined
appropriate by the Secretary.
Section 1848(r) of the Act specifies a series of steps and
activities for the Secretary to undertake to involve the physician,
practitioner, and other stakeholder communities in enhancing the
infrastructure for cost measurement, including for purposes of MIPS and
APMs. Section 1848(r)(2) of the Act requires the development of care
episode and patient condition groups, and classification codes for such
groups. That section provides for care episode and patient condition
groups to account for a target of an estimated one-half of expenditures
under Medicare Parts A and B (with this target increasing over time as
appropriate). We are required to take into account several factors when
establishing these groups. For care episode groups, we must consider
the patient's clinical issues at the time items and services are
furnished during an episode of care, such as clinical conditions or
diagnoses, whether or not inpatient hospitalization occurs, the
principal procedures or services furnished, and other factors
determined appropriate by the Secretary. For patient condition groups,
we must consider the patient's clinical history at the time of a
medical visit, such as the patient's combination of chronic conditions,
current health status, and recent significant history (such as
hospitalization and major surgery during a previous period), and other
factors determined appropriate. We are required to post on the CMS Web
site a draft list of care episode and patient condition groups and
codes for solicitation of input from stakeholders, and subsequently,
post on the CMS Web site an operational list of such groups and codes.
As required by section 1848(r)(2)(H) of the Act, no later than November
1 of each year (beginning with 2018), the Secretary shall, through
rulemaking, revise the operational list as the Secretary determines may
be appropriate.
To facilitate the attribution of patients and episodes to one or
more clinicians, section 1848(r)(3) of the Act requires the development
of patient relationship categories and codes that define and
distinguish the relationship and responsibility of a physician or
applicable practitioner with a patient at the time of furnishing an
item or service. These categories shall include different relationships
of the clinician to the patient and reflect various types of
responsibility for and frequency of furnishing care. We are required to
post on the CMS Web site a draft list of patient relationship
categories and codes for solicitation of input from stakeholders, and
subsequently, post on the CMS Web site an operational list of such
categories and codes. As required by section 1848(r)(3)(F) of the Act,
not later than November 1 of each year (beginning with 2018), the
Secretary shall, through rulemaking, revise the operational list as the
Secretary determines may be appropriate.
Section 1848(r)(4) of the Act requires that claims submitted for
items and services furnished by a physician or applicable practitioner
on or after January 1, 2018, shall, as determined appropriate by the
Secretary, include the applicable codes established for care episode
groups, patient condition groups, and patient relationship categories
under sections 1848(r)(2) and (3) of the Act, as well as the NPI of the
ordering physician or applicable practitioner (if different from the
billing physician or applicable practitioner).
Under section 1848(r)(5) of the Act, to evaluate the resources used
to treat patients, the Secretary shall, as determined appropriate, use
the codes reported on claims under section 1848(r)(4) of the Act to
attribute patients to one or more physicians and applicable
practitioners and as a basis to compare similar patients, and conduct
an analysis of resource use. In measuring such resource use, the
Secretary shall use per patient total allowed charges for all services
under Medicare Parts A and B (and, if the Secretary determines
appropriate, Medicare Part D) and may use other measures of allowed
charges and measures of utilization of items and services. The
Secretary shall seek comments through one or more mechanisms (other
than notice and comment rulemaking) from stakeholders regarding the
resource use methodology established under section 1848(r)(5) of the
Act.
On October 15, 2015, as required by section 1848(r)(2)(B) of the
Act, we posted on the CMS Web site for public comment a list of the
episode groups developed under section 1848(n)(9)(A) of the Act with a
summary of the background and context to solicit stakeholder input as
required by section 1848(r)(2)(C) of the Act. That posting is available
at https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html. The public comment period closed on February 15, 2016.
(c) Relationship to the Value Modifier
Currently, the VM established under section 1848(p) of the Act
utilizes six cost measures (see 42 CFR 414.1235): (1) A total per
capita costs for all attributed beneficiaries measure (which we will
refer to as the total per capita cost measure); (2) a total per capita
costs for all attributed beneficiaries with chronic obstructive
pulmonary disease (COPD) measure; (3) a total per capita costs for all
attributed beneficiaries with congestive heart failure (CHF) measure;
(4) a total per capita costs for all attributed beneficiaries with
coronary artery disease (CAD) measure; (5) a total per capita costs for
all attributed beneficiaries with diabetes mellitus (DM) measure; and
(6) an MSPB measure.
The total per capita cost measures (measures 1 through 5) and the MSPB measure include
payments under both Medicare Part A and Part B, but do not include
Medicare payments under Part D for drug expenses. Cost measures for the
VM are attributed at the physician group and solo practice level using
the Medicare-enrolled billing TIN. They are risk adjusted and payment
standardized, and the expected cost is adjusted for the TIN's specialty
composition. We refer readers to our discussions of these total per
capita cost measures (76 FR 73433 through 73434, 77 FR 69315 through
69316), MSPB measure (78 FR 74774 through 74780, 80 FR 71295 through
71296), payment standardization methodology (77 FR 69316 through
69317), risk adjustment methodology (77 FR 69317 through 69318), and
specialty adjustment methodology (78 FR 74781 through 74784) in earlier
rulemaking for the VM. More information about these measures may be
found in documents under the links titled ``Measure Information Form:
Overall Total Per Capita Cost Measure,'' ``Measure Information Form:
Condition-Specific Total Per Capita Cost Measures,'' and ``Measure
Information Form: Medicare Spending Per Beneficiary Measure'' available
at https://www.cms.gov/medicare/medicare-fee-for-service-payment/physicianfeedbackprogram/valuebasedpaymentmodifier.html.
The total per capita cost measures use a two-step attribution
methodology that is similar to, but not exactly the same as, the
assignment methodology used for the Shared Savings Program. The
attribution focuses on the delivery of primary care services (77 FR
69320) by both primary care clinicians and specialists. The MSPB
measure has a different attribution methodology. It is attributed to
the TIN that provides the
[[Page 77165]]
plurality of Medicare Part B claims (as measured by allowed charges)
during the index inpatient hospitalization. We refer readers to the
discussion of our attribution methodologies (77 FR 69318 through 69320,
79 FR 67960 through 67964) in prior rulemaking for the VM.
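The MSPB plurality attribution described above can be expressed as a short sketch. This is illustrative only, not the official measure specification; the function and input names are assumptions, and `part_b_claims` is assumed to be an iterable of (TIN, allowed charge) pairs for Part B services furnished during the index inpatient hospitalization.

```python
from collections import defaultdict

def attribute_mspb_episode(part_b_claims):
    """Illustrative sketch of MSPB episode attribution: the episode is
    attributed to the billing TIN with the plurality of Medicare Part B
    allowed charges during the index inpatient hospitalization."""
    totals = defaultdict(float)
    for tin, allowed in part_b_claims:
        totals[tin] += allowed
    if not totals:
        return None  # no Part B claims during the index admission
    # Plurality: the TIN with the largest share of allowed charges.
    return max(totals, key=totals.get)
```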
These total per capita cost measures include payments for a
calendar year and have been reported to TINs for several years through
the Quality and Resource Use Reports (QRURs), which are issued as part
of the Physician Feedback Program under section 1848(n) of the Act. The
total per capita cost measures have been used in the calculation of the
VM payment adjustments beginning with the 2015 payment adjustment
period and the MSPB measure has been used in the calculation of the VM
payment adjustments beginning with the 2016 payment adjustment period.
More information about the current attribution methodology for these
measures is available in the ``Fact Sheet for Attribution in the Value-
Based Payment Modifier Program'' document available at https://www.cms.gov/medicare/medicare-fee-for-service-payment/physicianfeedbackprogram/valuebasedpaymentmodifier.html.
In the MIPS and APMs RFI (80 FR 59102 through 59113), we solicited
feedback on the cost performance category. A summary of those comments
is located in the proposed rule (81 FR 28198).
(2) Weighting in the Final Score
As required by section 1848(q)(5)(E)(i)(II)(bb) of the Act, the
cost performance category shall make up no more than 10 percent of the
final score for the first MIPS payment year (CY 2019) and not more than
15 percent of the final score the second MIPS payment year (CY 2020).
Therefore, we proposed at Sec. 414.1350 that the cost performance
category would make up 10 percent of the final score for the first MIPS
payment year (CY 2019) and 15 percent of the final score for the second
MIPS payment year (CY 2020) (81 FR 28384). As required by section
1848(q)(5)(E)(i)(II)(aa) of the Act and proposed at Sec. 414.1350 (81
FR 28384), starting with the third MIPS payment year and for each MIPS
payment year thereafter, the cost performance category would make up 30
percent of the final score.
The following is a summary of the comments we received regarding
our proposals for the cost performance category weight in the final
score for the first and second MIPS payment years.
Comment: Several commenters supported the weighting of the cost
performance category as 10 percent of the MIPS final score for 2019.
However, many commenters encouraged us to reduce the
weight of the cost performance category to as low as 0 percent for 2019
due to lack of familiarity with cost measures. Other commenters
recommended a delay in the inclusion of the cost performance category
within the final score because attribution methods did not properly
identify the clinician who was responsible for the care and patients
could be attributed to clinicians who had little influence on their
overall care. Others recommended delay because risk adjustment methods
based on administrative data could not properly capture the clinical
risk differences among patients, placing clinicians who see more
complex patients at a disadvantage. Others noted that more time was
needed to perfect cost measures. Others recommended that cost measures
be attributed to only those clinicians who volunteer to participate in
a pilot in the transition year.
Response: Clinicians have received feedback on cost measures
through the VM and the Physician Feedback Program reports for a number
of years; however, we agree that clinicians may need time to become
familiar with cost measures in MIPS. The VM calculation and the
Physician Feedback Program are different in two significant ways from
the proposed approach to cost measurement in the MIPS. The first major
difference is that we proposed to attribute measures at the TIN/NPI
level for those submitting as individuals rather than at the TIN level
used for the VM. While this would not make a difference for those in
solo practice, it would present a significant change for those that
practice in groups and participate in MIPS as individuals. In MIPS, we
have finalized a policy in section II.E.5.a.(2) of this rule that those
that elect to participate in MIPS as groups must be assessed for all
performance categories as groups. Conversely, those that elect to
participate in MIPS as individual clinicians will be measured on all
four performance categories as an individual. With the exception of
solo practitioners (defined for the VM as a single TIN with one EP
identified by an NPI billing under the TIN), the VM evaluates
performance at the aggregate group level. For example, a surgeon in a
multi-specialty group who elects to participate in MIPS as an
individual would receive feedback on the cost measures attributed to
him or her individually as opposed to that of the entire group. Second,
as discussed in section II.E.5.e.(3)(c) of this final rule with comment
period, to facilitate participation at the individual level, we will
attribute cases at the TIN/NPI level, rather than at the TIN level, as
is done currently under the VM. Even for groups that have received
QRURs on cost measures under the VM, this global change to the
attribution logic is likely to change the attributed cases, which in
turn could affect their performance on cost measures.
In addition, as discussed in section II.E.6.a.(3) of this final
rule with comment period, scoring for the cost performance category
under MIPS is different from the VM because it is based on performance
within a decile system as opposed to the quality-tiering scoring system
used in the VM. A group or solo practitioner that scored in the average
range under the VM quality-tiering methodology may be scored ``above
average'' or ``below average'' in MIPS because of the difference in the
scoring methods. We believe it is important for this transition year
for MIPS eligible clinicians to have the opportunity to become familiar
with the attribution changes and the scoring changes by receiving
performance feedback showing what their performance on the cost
measures will look like under the MIPS attribution and scoring rules
before cost measures affect payment.
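The decile-based scoring contrast drawn above can be sketched as follows. This is an illustrative sketch only, not the finalized MIPS scoring specification; the function name and the shape of the benchmark inputs are assumptions, and the break points shown are hypothetical.

```python
def decile_score(measure_rate, decile_breaks):
    """Illustrative sketch of decile assignment of the kind described
    above: a measure rate is compared against nine sorted benchmark
    break points separating ten deciles, and the decile reached
    determines the score."""
    decile = 1
    for brk in decile_breaks:  # break points assumed sorted ascending
        if measure_rate >= brk:
            decile += 1
    return decile
```

Under this kind of scheme a clinician's score depends on position within the benchmark distribution, which is why a practice that fell in the average range under the VM quality-tiering methodology could land above or below average in MIPS.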
Section 1848(q)(5)(E)(i)(II)(bb) of the Act provides that for the
first and second MIPS payment years, ``not more than'' 10 percent and
15 percent, respectively, of a MIPS eligible clinician's final score
shall be based on performance in the cost performance category.
Accordingly, we believe that the statute affords discretion to adopt a
weighting for the cost performance category lower than 10 percent and
15 percent for the first and second payment years, respectively. For
the reasons described above, we believe that a transition period is
appropriate; accordingly, we are lowering the weight of the cost
performance category for the first and second MIPS payment years. We
are not finalizing our proposal for a weighting of 10 percent for the
transition year and 15 percent for the second MIPS payment year.
Instead we are finalizing a weighting of 0 percent for the transition
year and 10 percent for the second MIPS payment year.
We are not reducing the weight of the cost category due to concerns
with attribution, risk adjustment, or the measure specifications. We
intend to continue improving all aspects of the cost measures, but we
believe our final methods are sound. However, due to the changes in
scoring and attribution, we
[[Page 77166]]
agree that MIPS eligible clinicians should have more time to become
familiar with these measures in the context of MIPS. Finally, we do not
believe we should restrict the cost performance category to a pilot.
MIPS eligible clinicians are not required to submit data and the cost
performance category does not contribute to the final score for the
transition year. Therefore, we will calculate a cost performance
category score for all MIPS eligible clinicians for whom we can
reliably calculate a score.
Comment: Many commenters encouraged CMS to defer assigning any
weight to the cost performance category for MIPS until patient
relationship codes are in use.
Response: Section 1848(r)(3) of the Act requires us to develop
patient relationship categories and codes that define and distinguish
the relationship and responsibility of a physician or applicable
practitioner with a patient. We are currently reviewing comments
received on the draft list of patient relationship categories and will
post an operational list of these categories and codes in April 2017.
We disagree with commenters that we should wait until the patient
relationship codes are in use before measuring cost. While we believe
that these patient relationship codes can be an important contributor
to better clarifying the particular role of a clinician in patient
care, these codes will not be developed in time for the first MIPS
performance period. Moreover, section 1848(r)(4) of the Act directs that such
codes shall be included, as determined appropriate by the Secretary, on
claims for items and services furnished on or after January 1, 2018.
Following their inclusion on claims, we will need time to evaluate how
best to incorporate those codes into cost measures. While this
additional analysis of patient relationship codes takes place, the cost
performance category will remain an important part of the MIPS. In
their current form, we find the cost measures adopted in this final
rule with comment period both reliable and valid.
After consideration of the comments, we believe that a transition
period for measuring cost would be appropriate; therefore, we are not
finalizing the weighting of the cost performance category in the MIPS
final score as proposed. Instead, we are finalizing at Sec.
414.1350(b) a weighting of 0 percent for the 2019 MIPS payment year and
10 percent for the 2020 MIPS payment year. Starting with the 2021 MIPS
payment year, the cost performance category will be weighted at 30
percent, as required by section 1848(q)(5)(E)(i)(II)(aa) of the Act. We
recognize that the individual attribution of cost measures for those
MIPS eligible clinicians in group practices and the new MIPS scoring
system are changes for clinicians, and we would like to give them an
opportunity to gain experience with the cost measures before increasing
the weight of the performance category within the final score.
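The finalized weighting schedule can be summarized in a short sketch. This is an illustrative restatement of the weights finalized at Sec. 414.1350(b) as described above; the function name is an assumption, not CMS code.

```python
def cost_category_weight(payment_year: int) -> float:
    """Cost performance category weight in the MIPS final score, per the
    weights finalized at Sec. 414.1350(b): 0 percent for the 2019 MIPS
    payment year, 10 percent for 2020, and 30 percent for 2021 and each
    year thereafter, as required by section 1848(q)(5)(E)(i)(II)(aa)."""
    if payment_year < 2019:
        raise ValueError("MIPS payment adjustments begin with CY 2019")
    if payment_year == 2019:
        return 0.0
    if payment_year == 2020:
        return 0.10
    return 0.30
```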
(3) Cost Criteria
As discussed in section II.E.5.a. of the proposed rule (81 FR
28181), performance in the cost performance category would be assessed
using measures based on administrative Medicare claims data. We did not
propose any additional data submissions for the cost performance
category. As such, MIPS eligible clinicians and groups would be
assessed based on cost for Medicare patients only and only for patients
that are attributed to them. MIPS eligible clinicians or groups that do
not have enough attributed cases to meet or exceed the case minimums
proposed in sections II.E.5.e.(3)(a)(ii) and II.E.5.e.(3)(b)(ii) of the
proposed rule would not be measured on cost. For more discussion of
MIPS eligible clinicians and groups without a cost performance category
score, please refer to sections II.E.6.a.(3)(d) and II.E.6.b.(2) of this final
rule with comment period.
(a) Value Modifier Cost Measures Proposed for the MIPS Cost Performance
Category
For purposes of assessing performance of MIPS eligible clinicians
on the cost performance category, we proposed at Sec. 414.1350(a) to
specify cost measures for a performance period (81 FR 28384). For the
CY 2017 MIPS performance period, we proposed to utilize the total per
capita cost measure, the MSPB measure, and several episode-based
measures discussed in section II.E.5.e.(3)(b). of the proposed rule (81
FR 28200) for the cost performance category. The total per capita cost
measure and the MSPB measure are described in section II.E.5.e.(1)(c)
of the proposed rule (81 FR 28197). We proposed to include the total
per capita cost measure because it is a global measure of all Medicare
Part A and Part B resource use during the MIPS performance period and
is inclusive of the four condition-specific total per capita cost
measures under the VM (chronic obstructive pulmonary disease,
congestive heart failure, coronary artery disease, and diabetes
mellitus), for which performance tends to be correlated. Its inclusion
was also supported by commenters on the MIPS and APMs RFI (80 FR 59102
through 59113). We
also anticipate that MIPS eligible clinicians are familiar with the
total per capita cost measure as the measure has been in the VM since
2015 and feedback has been reported through the annual QRUR to all
groups starting in 2014.
We proposed to adopt the MSPB measure because by the beginning of
the initial MIPS performance period in 2017, we believe most MIPS
eligible clinicians will be familiar with the measure in the VM or its
variant under the Hospital Value-Based Purchasing (VBP) Program.
However, we proposed two technical changes to the MSPB measure
calculations for purposes of its adoption in MIPS which were discussed
in the proposed rule at 81 FR 28200.
We proposed to use the same methodologies for payment
standardization and risk adjustment for these measures for the cost
performance category as are defined for the VM. For more details on the
previously adopted payment standardization methodology, see 77 FR 69316
through 69317. For more details on the previously adopted risk
adjustment methodology, see 77 FR 69317 through 69318.
We did not propose to include the four condition-specific total per
capita cost measures (chronic obstructive pulmonary disease, congestive
heart failure, coronary artery disease, and diabetes mellitus).
Instead, we generally proposed to assess performance in part using the
episode-based measures (81 FR 28200). This shift is in response to
feedback received as part of the MIPS and APMs RFI (80 FR 59102 through
59113). In the MIPS and APMs RFI, commenters stated that they do not
believe the existing condition-specific total per capita cost measures
under the VM are relevant to their practice and expressed support for
episode-based measures under MIPS.
The following is a summary of the comments we received regarding our
proposal to include the total per capita cost measure and MSPB measure
as cost measures.
Comment: Several commenters supported the inclusion of the total
per capita cost measure.
Response: We will include the total per capita cost measure in the
CY 2017 performance period.
Comment: Several commenters opposed the inclusion of the total per
capita cost measure because it was developed to measure hospitals.
Response: We believe that the commenters may have confused the
total per capita cost measure with the MSPB measure, which was
originally developed for use in the Hospital Value
[[Page 77167]]
Based Purchasing program and is triggered on the basis of an index
admission. The total per capita cost measure was neither developed for
nor ever used by a hospital to measure quality or cost in a Medicare
program. Many patients who are attributed under the total per capita
cost measure are not admitted to a hospital in a calendar year. The
total per capita cost measure has been a part of the VM program since
its inception.
Comment: A commenter opposed the inclusion of the total per capita
cost measure because it focused on primary care.
Response: The MIPS program aims to measure the cost of all
clinicians, both primary care and specialists. While the total per
capita cost measure may be more likely to be attributed to clinicians
that provide primary care and uses a primary care attribution method,
other measures may be more likely to be attributed to specialists.
Including a diversity of measures allows the program to measure all
types of clinicians.
Comment: A commenter opposed the inclusion of the total per capita
cost measure and instead urged CMS to speed development of episode-
based measures.
Response: We plan to incorporate episode-based measures within the
cost performance category of the MIPS program. We proposed to include
41 episode-based measures for the CY 2017 performance period (81 FR
28200) and plan to continue to develop more episode groups. However, we
believe there is value to continue to include the total per capita cost
measure as well. Not all patients will necessarily be attributed in
episode-based measures and the total per capita cost measure is the
best current measure of all patients.
Comment: A commenter supported the CMS decision not to propose for
the cost performance category the four condition-specific total per
capita cost measures that are used in the Value Modifier because they
are duplicative of the total per capita cost measure covering all
patients. Several commenters recommended that the four condition-
specific total per capita cost measures be used in the cost performance
category.
Response: We intend to use episode-based measures for specific
disease focus areas in future years. We believe that the design of
episode-based measures which incorporate clinical input and distinguish
related from unrelated services will better allow clinicians to improve
performance on a particular population of patients. We will not include
the four condition-specific total per capita cost measures in MIPS.
Comment: Several commenters opposed the inclusion of a specialty
adjustment within the total per capita cost measure because this
adjustment would reward specialties that provide more expensive
treatments.
Response: The specialty adjustment for the total per capita cost
measure has been used since the 2016 VM, which was based on 2014 data.
We reviewed the different expected costs associated with various
specialties as part of the CY 2014 PFS rulemaking and found substantial
differences in average costs for attributed patients. For example,
specialties such as medical oncology tend to treat relatively costly
beneficiaries and bill for expensive Part B drugs, but other specialties
such as dermatology tend to treat low cost patients. Although cost data
are adjusted to account for differences in patient characteristics, the
effects of this adjustment do not fully account for the differences in
costs associated with different specialties under this measure;
therefore, we believe this adjustment is still warranted in MIPS. We
are open to ways to improve the risk adjustment of this measure in the
future to ensure that it appropriately evaluates all specialties of
medicine.
Comment: Several commenters supported the inclusion of a specialty
adjustment within the total per capita cost measure because patients
who become sick often seek more care from specialists and their
expected costs would not be reflected within the risk adjustment
methodology.
Response: We believe the specialty adjustment is a necessary
element of the total per capita cost measure. The MSPB and episode-
based measures are designed with expected costs based in part on the
clinical condition or procedure that triggers an episode. However, the
total per capita cost measure is risk adjusted only on the basis of
clinical conditions before the performance period. This risk adjustment
cannot completely accommodate changes in source of care that are the
result of new onset illness during the performance period. The
specialty adjustment helps to account for the differences in the
types of patients seen by different specialists.
Comment: A commenter recommended that costs associated with a
hospital visit should not be included in the total per capita cost
measure because multiple physicians are often involved.
Response: We do not believe that excluding hospital services from
the total per capita cost measure would be consistent with an overall
focus on care coordination that may extend to periods when a patient is
hospitalized.
Comment: Several commenters supported the inclusion of the MSPB
measure.
Response: We believe that this measure is both familiar to
clinicians from use in the VM and QRUR and reflects a period of care in
which a clinician may be able to influence cost. We will finalize the
MSPB measure.
Comment: Several commenters opposed the inclusion of the MSPB
measure because it was developed to measure hospitals. Others suggested
that it not be included in MIPS until it had been analyzed for use in a
clinician program. Several comments opposed the inclusion of the MSPB
measure because it focuses on primary care. Other commenters suggested
the episode-based measures better measured specialists.
Response: While this measure was originally used as part of the
Hospital Value-Based Purchasing program, the MSPB measure has also been
used in the VM, a clinician program, since 2016 and we continue to
believe that the clinician who provides a significant number of
services during a hospital visit also has some responsibility for
overall cost. We also see value in using common measures to create
parallel incentives for hospitals and MIPS eligible clinicians to
coordinate care and achieve efficiencies. We believe that the MSPB
measure will be attributed to all clinicians who provide significant
care in the hospital, including specialists and primary care clinicians
to the extent that they admit patients to the hospital. If a clinician
does not provide hospital services, that clinician will not be
attributed any cases to be scored on the measure.
Comment: Several commenters expressed concern that cost measures
could attribute patients for services before they are seen by the
clinician to whom they are attributed. For example, a clinician could
take over responsibility for primary care of a patient who had
experienced health difficulties in the earlier part of the year that
resulted in emergency room visits and hospital admissions that were
partly the result of a lack of care coordination. This patient
may not have had more than one visit with a particular clinician before
this new clinician took over, resulting in all costs being attributed
to the individual once he or she billed for two office visits for that
patient.
Response: Our attribution methods aim to measure the influence of a
[[Page 77168]]
clinician on the cost of care of his or her patients. In some cases,
certain elements within the cost measure may not be directly related to
the performance of the attributed clinician. We aim to address this by
requiring a minimum case volume and risk adjusting so that clinicians
are compared on the basis of similar patient populations. We will
continue to work with stakeholders to improve cost measures.
Comment: Several commenters noted that the same costs could be
included in the total per capita cost measure, the MSPB measure, and
the episode-based measures and suggested that costs should only be
counted once for an individual physician.
Response: We believe that attempting to remove costs from one
measure because they are reflected in another measure would make it
much harder for clinicians to understand their overall performance on
measures within the cost performance category. Measures are constructed
to capture various components of care. In some cases, a clinician or
group may provide primary care or episodic care for the same patient
and we believe that costs should be considered in all relevant measures
to make the measure performance comparable between MIPS eligible
clinicians.
Comment: One commenter recommended that CMS use a total cost of
care measure developed using a different methodology that is not
limited to Medicare and instead captures data from all payer claims
databases.
Response: We are unaware of a national data source that would allow
us to accurately capture cost data across all payers. Therefore, we are
limited to using Medicare cost data for the total per capita cost
measure. Following our consideration of the comments, we will finalize
our proposal to include the total per capita cost measure and the MSPB
measure within the MIPS cost performance category for the CY 2017
performance period. We believe these measures have the advantage of
having been used within the VM and covering a broad population of
patients.
(i) Attribution
In the VM, all cost measures are attributed to a TIN. In MIPS,
however, we proposed to evaluate performance at the individual and
group levels. Please refer to section II.E.5.e.(3)(c) of this rule for
our discussion to address attribution differences for individuals and
groups. For purposes of this section, we will use the general term MIPS
eligible clinicians to indicate attribution for individuals or groups.
For the MSPB measure, we proposed to use attribution logic that is
similar to what is used in the VM. MIPS eligible clinicians with the
plurality of claims (as measured by allowed charges) for Medicare Part
B services rendered during an inpatient hospitalization that is an
index admission for the MSPB measure during the applicable performance
period would be assigned the episode. The only difference from the VM
attribution methodology would be that the MSPB measure would be
assigned differently for individuals than for groups. For the total per
capita cost measure, we proposed to use a two-step attribution
methodology that is similar to the methodology used in the 2017 and
2018 VM. We also proposed to have the same two-step attribution process
for the claims-based population measures in the quality performance
category (81 FR 28192), CMS Web Interface measures, and CAHPS for MIPS.
However, we also proposed to make some modifications to the primary
care services definition that is used in the attribution methodology to
align with policies adopted under the Shared Savings Program.
The VM currently defines primary care services as the set of
services identified by the following Healthcare Common Procedure Coding
System (HCPCS)/CPT codes: 99201 through 99215, 99304 through 99340,
99341 through 99350, the welcome to Medicare visit (G0402), and the
annual wellness visits (G0438 and G0439). We proposed to update this
set to include new care coordination codes that have been implemented
in the PFS: Transitional care management (TCM) codes (CPT codes 99495
and 99496) and the chronic care management (CCM) code (CPT code 99490).
These services were added to the primary care service definition used
by the Shared Savings Program in June 2015 (80 FR 32746 through 32748).
We believe that these care coordination codes would also be appropriate
for assigning services in the MIPS.
In the CY 2016 PFS final rule, the Shared Savings Program also
finalized another modification to the primary care service definition:
To exclude nursing visits that occur in a skilled nursing facility
(SNF) (80 FR 71271 through 71272). Patients in SNFs (place of service
(POS) 31) are generally shorter stay patients who are receiving
continued acute medical care and rehabilitative services. While their
care may be coordinated during their time in the SNF, they are then
transitioned back to the community. Patients in a SNF (POS 31) require
more frequent practitioner visits--often from 1 to 3 times a week. In
contrast, patients in nursing facilities (NFs) (POS 32) are almost
always permanent residents and generally receive their primary care
services in the facility for the duration of their life. Patients in
the NF (POS 32) are usually seen every 30 to 60 days unless medical
necessity dictates otherwise. We believe that it would be appropriate
to follow a similar policy in MIPS; therefore, we proposed to exclude
services billed under CPT codes 99304 through 99318 when the claim
includes the POS 31 modifier from the definition of primary care
services.
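The proposed primary care service definition, including the two modifications discussed above, can be sketched as a simple filter. This is illustrative only and is not CMS code; the function name and inputs are assumptions, and the exact code set is governed by the rule text above.

```python
def is_primary_care_service(hcpcs_code: str, place_of_service: str) -> bool:
    """Illustrative sketch of the proposed MIPS primary care service
    definition: CPT codes 99201-99215, 99304-99340, and 99341-99350,
    the welcome to Medicare visit (G0402), the annual wellness visits
    (G0438, G0439), and the TCM (99495, 99496) and CCM (99490) care
    coordination codes, excluding 99304-99318 billed with place of
    service (POS) 31 (skilled nursing facility)."""
    g_codes = {"G0402", "G0438", "G0439"}
    care_coordination = {"99495", "99496", "99490"}  # TCM and CCM
    if hcpcs_code in g_codes or hcpcs_code in care_coordination:
        return True
    if not hcpcs_code.isdigit():
        return False
    code = int(hcpcs_code)
    # SNF exclusion: 99304-99318 under POS 31 are not primary care services.
    if 99304 <= code <= 99318 and place_of_service == "31":
        return False
    return (99201 <= code <= 99215 or
            99304 <= code <= 99340 or
            99341 <= code <= 99350)
```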
We believe that making these two modifications would help align the
primary care service definition between MIPS and the Shared Savings Program
and would improve the results from the two-step attribution process.
We note, however, that while we are aligning the definition for
primary care services, the two-step attribution for MIPS would be
different from the one used for the Shared Savings Program. We believe
there are valid reasons to have differences between MIPS and the Shared
Savings Program attribution. For example, as discussed in CY 2015 PFS
final rule (79 FR 67960 through 67962), we eliminated the primary care
service pre-step that is statutorily required for the Shared Savings
Program from the VM. We noted that without the pre-step, the
beneficiary attribution method would more appropriately reflect the
multiple ways in which primary care services are provided, which are
not limited to physician groups. As MIPS eligible clinicians include
more than physicians, we continue to believe it is appropriate to
exclude the pre-step.
In addition, in the 2015 Shared Savings Program final rule, we
finalized a policy for the Shared Savings Program that we did not
extend to the VM two-step attribution: To exclude select specialties
(such as several surgical specialties) from the second attribution step
(80 FR 32749 through 32754). We do not believe it is appropriate to
restrict specialties from the second attribution step for MIPS. If such
a policy were adopted under MIPS, then all specialists on the exclusion
list, unless they were part of a multispecialty group, would
automatically be excluded from measurement on the total per capita cost
measure, as well as on claims-based population measures which rely on
the same two-step attribution. While we do not believe that many MIPS
eligible clinicians or groups with these specialties would be
attributed enough cases to meet or exceed the case minimum, we believe
that an automatic exclusion could remove some MIPS eligible clinicians
[[Page 77169]]
and groups that should be measured for cost.
We requested comments on these proposed changes.
The following is a summary of the comments we received regarding
our proposal to use the attribution methods from the VM for the MSPB
and total per capita cost measure with changes to the definition of
primary care services.
Comment: Some commenters recommended that attribution be based in
part on a patient attestation of their relationship with a clinician.
Response: We do not currently have a method for patients to attest
to their relationship with a clinician, so we are unable to
incorporate this mechanism into cost measures at this time. We will
continue to work on improving attribution.
Comment: Several commenters opposed the attribution method used in
the MSPB of assigning patients to all physicians who provided at least
30 percent of inpatient care, indicating that the attribution method
had not been fully tested.
Response: The MSPB measure attributes patients to the clinician
that provided the plurality of Medicare Part B charges during the index
admission, not to all clinicians who provide at least 30 percent of
inpatient care. We believe that this method is the best way to identify
the single clinician who most influenced the care during a given
hospital admission.
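As a rough illustration of the plurality rule described in this
response, the attribution step could be sketched as follows (the claim
layout and clinician identifiers here are hypothetical placeholders,
not the measure's actual specification):

```python
from collections import defaultdict

def attribute_mspb_episode(claims):
    """Attribute an MSPB index admission to the single clinician with the
    plurality of Medicare Part B charges during the admission.

    `claims` is a list of (clinician_id, part_b_charge) pairs for services
    furnished during the index admission (hypothetical data layout).
    """
    charges = defaultdict(float)
    for clinician_id, charge in claims:
        charges[clinician_id] += charge
    # Plurality: the single clinician with the largest share of charges,
    # not every clinician above a fixed percentage of inpatient care.
    return max(charges, key=charges.get)

# Clinician "B" billed the most Part B charges during this stay.
episode_claims = [("A", 120.0), ("B", 310.0), ("C", 90.0), ("B", 45.0)]
print(attribute_mspb_episode(episode_claims))  # B
```

The sketch captures the contrast drawn in the response: a single
clinician is selected by plurality of charges, rather than every
clinician who exceeds a 30 percent threshold.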
Comment: A commenter supported the exclusion of skilled nursing
facility codes from the list of codes used to attribute the total per
capita cost measure because patients in skilled nursing facilities
require high intensity time-limited care.
Response: We are finalizing the exclusion of skilled nursing
facility codes as proposed.
Comment: Several commenters expressed concern that incident-to
billing practices, in which physicians bill for services provided by
other clinicians such as nurse practitioners or physician assistants,
obscure the actual clinician providing care and make attribution
difficult. A commenter suggested that a new modifier be created to
indicate when a service was provided under incident-to rules.
Response: ``Incident to'' billing is allowed, consistent with Sec.
410.26 of our regulations, when auxiliary personnel provide services
that are an integral, though incidental, part of the service of a
clinician, and are commonly furnished without charge or included in the
bill of a clinician. ``Incident to'' services are furnished under the
supervision of the billing clinician, and with certain narrow
exceptions, under direct supervision. These services are billed and
paid under the PFS as if the billing clinician personally furnished the
service. We recognize that some services of certain MIPS eligible
clinicians may be billed as incident to the services of others.
However, given that the billing clinician provides the requisite
supervision and bills for the service as if it was personally
furnished, we do not believe ``incident to'' billing interferes with
appropriate attribution of services. If this is a concern for certain
MIPS eligible clinicians, we believe billing practices could be
adjusted such that services are billed by the individual MIPS eligible
clinician who provides the service.
Comment: A commenter expressed concern that attributing care to a
single professional or group for costs could cause compartmentalization
of care.
Response: The cost measures that are used in MIPS aim to measure
how a particular clinician or group impacts a patient's cost, both
directly and indirectly. We have aimed to design a program that
encourages more consideration of the costs of care associated with
patients even after other clinicians become involved, so the measures
require that clinicians who are most significantly responsible for
their care, as measured by Medicare allowed amounts, assume
accountability for it. We believe this system will encourage more
coordination of care and consideration of cost.
Comment: A commenter opposed the inclusion of transitional care
management within the list of codes used to attribute the total per
capita cost measure, noting that these codes are often used by
specialists that may not have overall responsibility for care.
Response: We believe that those clinicians who are billing for
transitional care management are providing significant services that
reflect oversight for a patient. In some cases, the clinician providing
transitional care management is different from the one providing
primary care but in other cases it is the same individual. We believe
that our attribution method of assigning patients to the clinician who
provides the plurality of primary care services (which includes many
services other than transitional care management) is the best method to
attribute the total per capita cost measure. This change is consistent
with the attribution methods that are used in the Shared Savings
Program.
After considering the comments, we are finalizing our proposal to
use modified attribution methods from the VM for the total per capita
cost measure and the MSPB. Specifically, we are finalizing the removal
of the skilled nursing facility codes (CPT codes 99304 through 99318)
from, and the addition of the transitional care management codes (CPT
codes 99495 and 99496) and the chronic care management code (CPT code
99490) to, the list of primary care services used to attribute the
total per capita cost measure. We believe that these changes to the
attribution methodology allow us to better identify the clinician or
group accountable, and the extent of that accountability, for total
per capita cost.
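The finalized changes to the primary care service list, combined with
the POS 31 exclusion described earlier in this section, can be sketched
as follows (the base set of evaluation and management codes and the
claim fields are illustrative placeholders, not the complete code
list):

```python
# Hypothetical base set of office/outpatient E/M codes (illustrative only).
BASE_PRIMARY_CARE = {str(c) for c in range(99201, 99216)}
# Nursing facility services codes named in this rule (CPT 99304-99318).
SNF_NF_CODES = {str(c) for c in range(99304, 99319)}
# Codes added by this rule: transitional and chronic care management.
TCM_CCM_CODES = {"99495", "99496", "99490"}

def is_primary_care_service(cpt_code, place_of_service):
    """Illustrative check of whether a claim line counts as a primary care
    service for attributing the total per capita cost measure."""
    if cpt_code in TCM_CCM_CODES:
        return True
    if cpt_code in SNF_NF_CODES:
        # Excluded when furnished in a skilled nursing facility (POS 31);
        # nursing facility visits (POS 32) still count.
        return place_of_service != "31"
    return cpt_code in BASE_PRIMARY_CARE

print(is_primary_care_service("99306", "31"))  # False: SNF visit excluded
print(is_primary_care_service("99306", "32"))  # True: NF visit still counts
print(is_primary_care_service("99495", "11"))  # True: TCM code added
```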
(ii) Reliability
We seek to ensure that MIPS eligible clinicians and groups are
measured reliably; therefore, we intend to use the 0.4 reliability
threshold currently applied to measures under the VM to evaluate their
reliability. A 0.4 reliability threshold standard means that the
majority of MIPS eligible clinicians and groups who meet the case
minimum required for scoring under a measure have measure reliability
scores that exceed 0.4. We generally consider reliability levels
between 0.4 and 0.7 to indicate ``moderate'' reliability and levels
above 0.7 to indicate ``high'' reliability. In cases where we have
considered high participation in the applicable program to be an
important programmatic objective, such as the Hospital VBP Program, we
have selected this 0.4 moderate reliability standard. We believe this
standard ensures moderate reliability, but does not substantially limit
participation.
To ensure sufficient measure reliability for the cost performance
category in MIPS, we also proposed at Sec. 414.1380(b)(2)(ii) to use
the minimum of 20 cases for the total per capita cost measure (81 FR
28386), the same case minimum that is being used for the VM. An
analysis in the CY 2016 PFS final rule (80 FR 71282) confirms that this
measure has high average reliability for solo practitioners (0.74) as
well as for groups with more than 10 professionals (0.80).
In the CY 2016 PFS final rule, we finalized a policy that increased
the minimum number of cases for the MSPB measure from 20 to 125 (80 FR
71295 through 71296) due to reliability concerns with the measure when
it includes the specialty adjustment. That said, we recognize that a case
size increase of this nature also may limit the ability of MIPS
eligible clinicians to be scored on the MSPB measure, and have been
evaluating alternative measure calculation strategies for potential
inclusion under MIPS that better balance participation, accuracy, and
reliability. As a result of this, we
[[Page 77170]]
proposed two modifications to the MSPB measure.
The first technical change we proposed was to remove the specialty
adjustment from the MSPB measure's calculation. As currently reported
on the QRURs, the MSPB measure is risk adjusted to ensure that these
comparisons account for case-mix differences between practitioners'
patient populations and the national average. It is unclear whether
the additional adjustment for physician specialty improves the
accounting for case-mix differences for acute care patients, and thus
it may not be needed; as our analysis below indicates, reliability for
the measure improves when the adjustment is removed.
The second technical change we proposed was to modify the cost
ratio used within the MSPB equation to evaluate the difference between
observed and expected episode cost at the episode level before
comparing the two at the individual or group level. In other words,
rather than summing all of the observed costs and dividing by the sum
of all the expected costs, we would take the observed to expected cost
ratio for each MSPB episode assigned to the MIPS eligible clinician or
group and take the average of the assigned ratios. As we did
previously, we would take the average ratio for the MIPS eligible
clinician or group and multiply it by the average of observed costs
across all episodes nationally, in order to convert a ratio to a dollar
amount.
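The difference between the prior ratio calculation and the finalized
per-episode calculation can be shown with a small numerical sketch (the
dollar figures are invented for illustration):

```python
def mspb_old_method(episodes, national_avg_observed):
    """Prior approach: sum observed costs, divide by the sum of expected
    costs, then convert the ratio to a dollar amount."""
    total_observed = sum(obs for obs, exp in episodes)
    total_expected = sum(exp for obs, exp in episodes)
    return (total_observed / total_expected) * national_avg_observed

def mspb_new_method(episodes, national_avg_observed):
    """Finalized approach: take the observed-to-expected ratio for each
    episode, average the ratios, then convert to a dollar amount."""
    ratios = [obs / exp for obs, exp in episodes]
    return (sum(ratios) / len(ratios)) * national_avg_observed

# Two episodes attributed to the same clinician; the two methods weight
# the episodes differently.
episodes = [(10000.0, 8000.0), (20000.0, 25000.0)]
print(round(mspb_old_method(episodes, 15000.0), 2))  # 13636.36
print(round(mspb_new_method(episodes, 15000.0), 2))  # 15375.0
```

In the old method the larger episode dominates the aggregate ratio; in
the finalized method each episode contributes equally through its own
observed-to-expected ratio before the average is converted to dollars.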
Our analysis, which is based on all Medicare Part A and B claims
data for beneficiaries discharged from an acute inpatient hospital
between January 1, 2013 and December 1, 2013, indicates that these two
changes would improve the MSPB measure's ability to calculate costs and
the accuracy with which it can be used to make clinician-level
performance comparisons. We also believe that these changes would help
ensure the MSPB measure can be applied to a greater number of MIPS
eligible clinicians while still maintaining its status as a reliable
measure. More specifically, our analysis indicated that after making
these changes to the MSPB measure's calculations, the MSPB measure
meets the desired 0.4 reliability threshold used in the VM for over 88
percent of all TINs with a 20-case minimum, including solo
practitioners. While this percentage is lower than our current policy
for the VM (where virtually all TINs with 125 or more episodes have
moderate reliability), setting the case minimum at 20 allows for an
increase in participation in the MSPB measure. Therefore, we proposed
to use a minimum of 20 cases for the MSPB measure (81 FR 28386). As
noted previously, we consider expanded participation of MIPS eligible
clinicians, particularly individual reporters, to be of great import
for the purposes of transitioning to MIPS and believe that this
justifies a slight decrease of the percentage of TINs meeting the
reliability threshold.
We welcomed public comment on these proposals.
The following is a summary of the comments we received regarding our
proposals to use a 0.4 reliability threshold and a minimum of 20 cases
for the total per capita cost measure.
Comment: Many commenters expressed concern with the proposed 0.4
reliability threshold for cost measures. Many commenters suggested that
only measures with high reliability (over 0.7 or 0.8) be used within
the program.
Response: We believe that measures with a reliability of 0.4 with a
minimum attributed case size of 20 meet the standards for being
included as cost measures within the MIPS program. We aim to measure
cost for as many clinicians as possible and limiting measures to
reliability of 0.7 or 0.8 would result in few individual clinicians
with attributed cost measures. In addition, a 0.4 reliability threshold
ensures moderate reliability for most MIPS eligible clinicians or group
practices that are being measured on cost.
We will finalize our reliability threshold of 0.4 but will continue
to work to develop measures and improve specifications to ensure the
highest level of reliability feasible within the cost measures in the
MIPS program. We did not receive any specific comments on our
proposal to use a minimum of 20 cases for the total per capita cost
measure. We are finalizing at Sec. 414.1380(b)(2)(ii) that a MIPS
eligible clinician must meet the minimum case volume specified by CMS
to be scored on a cost measure. Therefore, a MIPS eligible clinician
must have a minimum of 20 cases to be scored on the total per capita
cost measure.
The following is a summary of the comments we received regarding
our proposal to modify the case minimum for the MSPB, the proposal to
remove the specialty adjustment from the MSPB measure's calculation,
and the proposal to modify the cost ratio used within the MSPB
equation.
Comment: Several commenters opposed the 20 case minimum for MSPB,
noting that CMS had previously increased the minimum to 125 within the
VM program and that the 20 case minimum did not meet our 0.4
reliability threshold standard.
Response: We understand the concerns of the commenters. We would
like to reiterate that the proposed adjustments to the MSPB measure
improve its reliability at 20 cases. As stated in the proposed rule,
these changes result in the measure meeting 0.4 reliability for over 88
percent of TINs with at least 20 attributed cases, including solo
practitioners. In MIPS, however, we must assess reliability at the
individual clinician level as well as the TIN level because clinicians
may choose to be assessed as individuals or part of a group in the MIPS
program. Therefore, we reran the reliability analysis for the proposed
MSPB using 2015 data to assess the impact at the TIN/NPI level. Table 6
summarizes the results for different case volumes. This analysis
indicates only 77 percent of individual TIN/NPIs have 0.4 reliability
at a 20 case volume. Therefore, we will increase the minimum case
volume to 35 cases, which meets the 0.4 reliability threshold for 90
percent of individual TIN/NPIs and 97 percent of attributed TINs.
Table 6--Proposed MSPB Reliability With TIN/NPI Attribution
----------------------------------------------------------------------------
 Reliability of revised MSPB measure using   Minimum 20  Minimum 30  Minimum 35
  TIN/NPI attribution                        cases (%)   cases (%)   cases (%)
----------------------------------------------------------------------------
 Percent of TIN/NPIs with 0.4 reliability        77          86          90
  at different minimum case volume
  requirements
 Percent of TINs with 0.4 reliability at         90          95          97
  different minimum case volume
  requirements
----------------------------------------------------------------------------
[[Page 77171]]
Comment: Several commenters supported the removal of specialty
adjustment from the MSPB measure, noting that in some cases certain
specialties may have higher spending that is not appropriate based on
the condition of the patient. Several other commenters opposed the
removal of the specialty adjustment from the MSPB measure because it
would disadvantage those specialists who care for the sickest patients
and not recognize the differences in the types of patients seen by
different specialties. Some commenters opposed the change in the
calculation of observed to expected ratio at the episode level rather
than the clinician or group level.
Response: The MSPB measure includes not only risk adjustment to
capture the clinical conditions of the patients in the period prior to
the index admission, but also includes risk adjustment that reflects
the clinical presentation based on the index MS-DRG. We believe that
including the index MS-DRG helps to identify a pool of patients either
receiving a procedure or admitted for a particular medical condition
and the HCC risk adjustment helps to adjust for comorbidities which may
suggest that a clinician is treating patients who are sicker than most
within that pool. Since there is less variation in the specialties
caring for a particular type of MS-DRG, adding specialty adjustment
reduces reliability. We will continue to analyze all cost measures to
ensure they include the proper risk adjustment and meet our reliability
threshold.
We are finalizing at Sec. 414.1380(b)(2)(ii) that a MIPS eligible
clinician must meet the minimum case volume specified by CMS to be
scored on a cost measure. Following our consideration of the comments,
we are not finalizing our proposal of a minimum case volume of 20 for
the MSPB measure. Instead, we are finalizing a minimum case volume of
35 for the MSPB. We are also adopting our proposals to not adjust the
MSPB measure by specialty and to calculate observed to expected ratio
at an episode level. We will continue to analyze the measure to ensure
reliability.
(b) Episode-Based Measures Proposed for the MIPS Cost Performance
Category
As noted in the previous section, we proposed to calculate several
episode-based measures for inclusion in the cost performance category.
Groups have received feedback on their performance on episode-based
measures through the Supplemental Quality and Resource Use Report
(sQRUR), which are issued as part of the Physician Feedback Program
under section 1848(n) of the Act; however, these measures have not been
used for payment adjustments through the VM. Several stakeholders
expressed in the MIPS and APMs RFI the desire to transition to episode-
based measures and away from the general total per capita cost measures
used in the VM. Therefore, in lieu of using the total per capita cost
measures for populations with specific conditions that are used for the
VM, we proposed episode-based measures for a variety of conditions and
procedures that are high cost, have high variability in resource use,
or are for high impact conditions. In addition, as these measures are
payment standardized and risk adjusted, we believe they meet the
statutory requirements for appropriate measures of cost as defined in
section 1848(p)(3) of the Act because the methodology eliminates the
effects of geographic adjustments in payment rates and takes into
account risk factors.
We also reiterated that while we transition to using episode-based
measures for payment adjustments, we will continue to engage
stakeholders through the process specified in section 1848(r)(2) of the
Act to refine and improve the episodes moving forward.
As noted earlier, we have provided performance information on
episode-based measures to MIPS eligible clinicians through the sQRURs,
which are released in the fall. The sQRURs provide groups and solo
practitioners with information to evaluate their resource utilization
on conditions and procedures that are costly and prevalent in the
Medicare FFS population. To accomplish this goal, various episodes are
defined and attributed to one or more groups or solo practitioners most
responsible for the patient's care. The episode-based measures include
Medicare Part A and Part B payments for services determined to be
related to the triggering condition or procedure. The payments included
are standardized to remove the effect of differences in geographic
adjustments in payment rates and incentive payment programs and they
are risk adjusted for the clinical condition of beneficiaries. Although
the sQRURs provide detailed information on these care episodes, the
calculations are not used to determine a TIN's VM payment adjustment
and are only used to provide feedback.
We proposed to include in the cost performance category several
clinical condition and treatment episode-based measures that have been
reported in the sQRUR or were included in the list of the episode
groups developed under section 1848(n)(9)(A) of the Act published on
the CMS Web site: https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/MACRA-MIPS-and-APMs.html. The identified episode-based measures
have been tested and previously published. Tables 4 (81 FR 28202-28206)
and 5 (81 FR 28207) of the proposed rule listed the 41 clinical
condition and treatment episode-based measures proposed for the CY 2017
performance period, as well as whether the episodes have previously
been reported in a sQRUR.
While we proposed the measures listed in Tables 4 and 5 of the
proposed rule for the cost performance category, we stated in the
proposed rule that we were uncertain as to how many of these measures
we would ultimately include in the final rule with comment period. As
these measures have never been used for payment purposes, we indicated
that we may choose to specify a subset of these measures in the final
rule with comment period. We requested public comment on which of the
measures listed in Tables 4 and 5 of the proposed rule to include in
the final rule with comment period. In addition to considering public
comments, we intended to consider the number of MIPS eligible
clinicians able to be measured, the episode's impact on Medicare Part A
and Part B spending, and whether the measure has been reported through
sQRUR. In addition, while we do not believe specialty adjustment is
necessary for the episode-based measures, we will continue to explore
this further given the diversity of episodes. We solicited comment on
whether we should specialty adjust the episode-based measures.
The following is a summary of the comments we received regarding the
episode-based measures proposed for the cost performance category for
the CY 2017 performance period.
Comment: Several commenters supported the inclusion of episode-based
measures because they more closely tracked a clinician's influence on
the care provided than total per-capita cost measures.
Response: Episode-based measures are an important component of the
overall measurement of cost and we are finalizing a subset of episode-
based measures.
Comment: Several commenters supported the eventual inclusion of
episode-based measures in the cost performance category but opposed the
inclusion of these measures in the transition year of MIPS because
clinicians are not familiar with them yet and have not had the
opportunity to receive feedback on them. Commenters
[[Page 77172]]
recommended a more transparent process in the development of episode
groups. Others recommended that only those measures included in the
sQRUR in previous years be included in the transition year of the MIPS
program.
Response: We agree with the commenters. Even though we have reduced
the weight of the cost performance category to 0 percent for the first
MIPS payment year, we believe that clinicians would benefit from more
exposure to these episode-based measures and how they might be scored
before they are included in the MIPS final score. While 14 of the
episode-based measures we proposed were included in the 2014 sQRUR, a
number of them have never been included in the VM or a sQRUR.
Therefore, as discussed below, we are finalizing a subset of the
proposed episode-based measures, which have been included in the sQRUR
for 2014 and meet our reliability threshold of 0.4. We note that we
selected episodes from the 2014 sQRUR because these measures have been
included in 2 years of sQRURs (2014 and 2015), which provides
clinicians an opportunity for initial feedback before the MIPS
performance period begins, although that feedback does not contain any
scoring information or reflect the updated attribution changes.
In addition, we intend to provide performance feedback to
clinicians on additional episode-based measures that we are not
finalizing for inclusion in the MIPS cost performance category for the
CY 2017 performance period but may want to consider proposing for
inclusion in the MIPS cost performance category in the future. Section
1848(q)(12)(A)(i) of the Act requires that we provide timely
confidential feedback to MIPS eligible clinicians on their performance
under the cost performance category. While the feedback on these
additional episode-based measures would be for informational purposes
only, we believe it will aid in MIPS eligible clinicians' ability to
understand the measures and the attribution rules and methods that we
use to calculate performance on these measures, which may be helpful in
the event that we decide to propose the measures for the MIPS cost
performance category in future rulemaking.
Comment: Some commenters suggested that 41 episode-based measures
was too many and that a smaller number should be used in the program.
Another commenter suggested that CMS establish a maximum number of
episode-based measures that may be attributed to a particular clinician
or group.
Response: We believe that a large number of episode-based measures
is needed to capture the diversity of clinicians in the MIPS program,
as many clinicians may only have a small number of attributable
episodes. While some large multispecialty groups may have a large
number of episodes attributed, we believe this reflects the diversity
of care that they are providing to patients. However, for the CY 2017
performance period, we are finalizing a reduced set of measures which
are reliable at the group (TIN) and individual (TIN/NPI) level and
where feedback has been previously presented to eligible clinicians or
groups.
As discussed in the preceding response, we also intend to provide
performance feedback to MIPS eligible clinicians under section
1848(q)(12)(A)(i) of the Act on additional episode-based measures for
informational purposes only.
Comment: A commenter suggested that CMS provide technical
assistance to specialty societies and other organizations in order to
develop episode groups for specialty care.
Response: Episode development under section 1848(r) of the Act will
continue. This process includes extensive communication with technical
experts in the field and stakeholders but does not provide for
technical assistance to organizations.
Comment: A commenter opposed the use of episode-based measures for
upper respiratory infection (measure 33) and deep vein thrombosis of
extremity (measure 34) because they are likely to occur in high risk
patients.
Response: For the CY 2017 performance period, we are only
finalizing episode-based measures which have been previously reported
in the 2014 supplemental QRUR and meet our reliability thresholds.
Upper respiratory infection and deep vein thrombosis of extremity were
not included in the 2014 sQRUR, therefore we are not finalizing these
measures for the MIPS CY 2017 performance period. We intend to develop
episode-based measures that cover patients with various levels of risk.
We believe that an advantage of episode-based measures is that they
define a patient population whose members are clinically similar, even
if all of those patients are high risk. In addition, episode-based
measures are risk adjusted in the
same fashion as the other cost measures that were proposed to be
included within the program.
Comment: Several commenters suggested development of future
episode-based measures because many clinicians do not have episode-
based measures for patients they treat.
Response: We intend to continue to develop episode-based measures
that cover more procedures and conditions and invite stakeholder
feedback on additional conditions or procedures.
Comment: A commenter expressed concern that ICD-9-CM codes are
insufficient to be used within episode-based measures because they do
not contain enough clinical data to predict costs. Others suggested
that the measures should be updated to use ICD-10-CM codes.
Response: ICD-9-CM was used for diagnosis coding for Medicare
claims until October 1, 2015. Because ICD-9-CM codes were required for
billing for all services, we believe they are the richest source of
clinical data available to allow us to specify and risk adjust episode-
based measures. The transition from ICD-9-CM to ICD-10-CM took place on
October 1, 2015. There are many more diagnosis codes available in ICD-
10-CM than in ICD-9-CM which reflect increased specificity in some
clinical areas. In preparation for the transition to ICD-10-CM, a
crosswalk of diagnosis codes from ICD-9-CM to ICD-10-CM was created and
this was used for the transition of coverage policies and other
documents that include diagnosis codes. We expect to use this crosswalk
as a baseline for our transition work but understand that there may be
changes that need to be made to accommodate the different use of
diagnostic codes with ICD-10-CM.
Comment: A commenter suggested that CMS consider episode-based
measures for chronic conditions that do not have an inpatient trigger,
so that costs for chronic conditions can be assessed under the cost
performance category even if an inpatient stay does not occur.
Response: We will continue to work to develop episode-based
measures and our work is not limited to those conditions that include
an inpatient stay.
Comment: A commenter stated that it is difficult to attribute an
episode-based measure to a clinician providing a diagnostic service.
Response: One feature of episode-based measures is that they allow
for the creation of a list of related services for a particular
condition or procedure. This means that episode-based measures could be
triggered on the basis of a diagnostic service if experts could develop
a list of services that are typically related. Among our ten finalized
episode-based measures is one triggered on the basis of colonoscopy,
which is a diagnostic service.
Comment: A commenter indicated that future development of episode-
based measures should not be limited to
[[Page 77173]]
Methods A and B as described in the rule.
Response: We generally believe that a consistent approach to cost
measure development is easier to understand and fair to all clinicians.
However, we recognize that cost measure development is ongoing and will
continue to investigate methods to best capture the contributions of
individual clinicians and groups to cost and will consider other
methods if they are necessary.
Comment: Several commenters expressed concern with particular
elements of the technical specifications of certain episode-based
measures. One commenter requested that pneumatic compression devices
be added as a relevant service to the VTE episode-based measure, that
patient-activated event recorders be removed from the list of relevant
services for the heart failure (chronic) episode-based measure, and
that AV node ablation be removed from the list of relevant services
for the Atrial Fibrillation/Flutter (Chronic) episode-based measure,
along with other recommendations.
Response: As we mentioned, we want to use episode-based measures
that meet our reliability threshold and for which we have provided
feedback through the 2014 sQRUR. We invite continued feedback on the
episode-based measures as they are created and refined through the
process outlined in section 1848(r) of the Act. However, we are not
modifying the specifications for any of the episodes that we are
finalizing in this rule.
Comment: A commenter recommended that the osteoporosis and
rheumatoid arthritis episode-based measures not be included in cost
measurement in the transition year because these measures have not
been thoroughly vetted.
Response: Although all episode-based measures were created with
clinical input, the measures identified by the commenter were not
included in the 2014 sQRUR, so individual clinicians may be unfamiliar
with them before the MIPS performance period. Therefore, we are not
finalizing these episode-based measures for the CY 2017 performance
period.
Comment: A commenter expressed concern with the use of HCC scores
to risk adjust episode-based measures because HCC scores have been
shown to under-predict costs for high cost patients or for patients in
rural areas.
Response: We are unaware of other risk adjustment methodologies
that are more appropriate than HCC for Medicare beneficiaries. We will
continue to conduct analyses to ensure that risk adjustment is as
precise as possible to ensure that clinicians are not inappropriately
disadvantaged because of the use of this risk adjustment methodology.
Comment: A commenter supported the use of procedure codes to
trigger the episode-based measure for cataract surgery as opposed to
the licensure status of the physician. Another commenter expressed
concern with the episode-based measure for cataract surgery because it
did not reflect previous discussions with CMS regarding this episode-
based measure.
Response: We will continue to work to improve the specifications of
the episode-based measures. We are finalizing the episode-based measure
for Lens and Cataract Procedures because it meets our reliability
threshold and was included in the 2014 sQRUR. We offered stakeholders
the opportunity to review measure specifications for all of the
episode-based measures under development in a posting in February 2016
and invite continued feedback on the specifications going forward.
Comment: A commenter recommended that CMS provide more guidance on
the implications of billing for a trigger code for the lens and
cataract episode-based measure and including a modifier for
preoperative management only (modifier 56) or postoperative management
only (modifier 55).
Response: Clinicians who bill for services with modifiers that
indicate that they did not actually perform the index procedure will
not be attributed for the costs associated with that episode.
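The modifier rule described in this response can be sketched as follows. This is an illustration only, under assumed claim data shapes; it is not CMS's actual claims-processing logic.

```python
# Illustrative sketch of the rule above: claims billed with modifier 56
# (preoperative management only) or modifier 55 (postoperative management
# only) indicate the clinician did not perform the index procedure, so the
# episode's costs are not attributed to that clinician. The claim structure
# is an assumption for illustration.

NON_PERFORMING_MODIFIERS = {"55", "56"}

def eligible_for_attribution(claim: dict) -> bool:
    """Return True if a trigger-code claim can attribute the episode's costs."""
    return not (set(claim.get("modifiers", [])) & NON_PERFORMING_MODIFIERS)
```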
We appreciate the enthusiasm expressed by many commenters for the
development of episode-based measures and their more nuanced focus on
particular types of care. We also understand the concerns expressed
regarding lack of familiarity with the episode-based measures. For this
reason, we are modifying our proposal and finalizing for the CY 2017
performance period only 10 episode-based measures from the proposed
rule. All of these measures were included in the 2014 sQRUR and meet
the reliability threshold of 0.4 for the majority of clinicians and
groups at a case minimum of 20. Table 7 includes the episode-based
measures that are finalized for the CY 2017 performance period and
includes their reliability, which we calculated using data from the
2015 sQRUR when the measure is attributed at the TIN level, as in the
VM, and when attributed at the TIN/NPI level, as we will do under the
MIPS program. The measures listed in Table 7 will be used (along with
the total per capita cost measure and the MSPB measure finalized in
this rule) to determine the cost performance category score. As we
noted earlier, the weight of the cost performance category is 0 percent
for the 2019 MIPS payment year; therefore, the performance category
score will provide information to MIPS eligible clinicians, but
performance will not affect the final score for the 2019 MIPS payment
year.
Table 7--Episode-Based Measures Finalized for the CY 2017 Performance Period
----------------------------------------------------------------------------------------------------------------
Method type/ measure number from % TINs % TIN/NPIs
Table 4 (Method A) and Table 5 Episode name and Included in 2014 meeting 0.4 meeting 0.4
(Method B) from proposed rule description sQRUR reliability reliability
* threshold threshold
----------------------------------------------------------------------------------------------------------------
A/1............................. Mastectomy (formerly Yes............... 99.6 100.0
titled ``Mastectomy for
Breast Cancer'')--
Mastectomy is triggered
by a patient's claim with
any of the interventions
assigned as Mastectomy
trigger codes. Mastectomy
can be triggered by either
an ICD procedure code or
CPT codes in any setting
(e.g., hospital, surgical
center).
A/5............................. Aortic/Mitral Valve Yes............... 93.9 92.0
Surgery--Open heart valve
surgery (Valve) episode
is triggered by a patient
claim with any of the
Valve trigger codes.
[[Page 77174]]
A/8............................. Coronary Artery Bypass Yes............... 96.9 94.8
Graft (CABG)--Coronary
Artery Bypass Grafting
(CABG) episode is
triggered by an inpatient
hospital claim with any
of the CABG trigger codes for
coronary bypass. CABG
generally is limited to
facilities with a Cardiac
Care Unit (CCU); hence
there are no episodes or
comparisons in other
settings.
A/24............................ Hip/Femur Fracture or Yes............... 88.9 76.1
Dislocation Treatment,
Inpatient (IP)-Based--
Fracture/dislocation of
hip/femur (HipFxTx)
episode is triggered by a
patient claim with any of
the interventions
assigned as HipFxTx
trigger codes. HipFxTx
can be triggered by
either an ICD procedure
code or CPT codes in any
setting.
B/1............................. Cholecystectomy and Common Yes............... 89.6 81.8
Duct Exploration--
Episodes are triggered by
the presence of a trigger
CPT/HCPCS code on a claim
when the code is the
highest cost service for
a patient on a given day.
Medical condition
episodes are triggered by
IP stays with specified
MS-DRGs.
B/2............................. Colonoscopy and Biopsy-- Yes............... 100.0 99.9
Episodes are triggered by
the presence of a trigger
CPT/HCPCS code on a claim
when the code is the
highest cost service for
a patient on a given day.
Medical condition
episodes are triggered by
IP stays with specified
MS-DRGs.
B/3............................. Transurethral Resection of Yes............... 95.2 95.5
the Prostate (TURP) for
Benign Prostatic
Hyperplasia--For
procedural episodes,
treatment services are
defined as the services
attributable to the MIPS
eligible clinician or
group managing the
patient's care for the
episode's health
condition.
B/5............................. Lens and Cataract Yes............... 99.7 99.5
Procedures--Procedural
episodes are triggered by
the presence of a trigger
CPT/HCPCS code on a claim
when the code is the
highest cost service for
a patient on a given day.
B/6............................. Hip Replacement or Repair-- Yes............... 97.8 97.7
Procedural episodes are
triggered by the presence
of a trigger CPT/HCPCS
code on a claim when the
code is the highest cost
service for a patient on
a given day.
B/7............................. Knee Arthroplasty Yes............... 99.9 99.8
(Replacement)--Procedural
episodes are triggered by
the presence of a trigger
CPT/HCPCS code on a claim
when the code is the
highest cost service for
a patient on a given day.
----------------------------------------------------------------------------------------------------------------
* Table 4 of the proposed rule is located at 81 FR 28202-28206; Table 5 of the proposed rule is located at 81 FR
  28207.
In addition, for informational purposes, we intend to provide
feedback to MIPS eligible clinicians under section 1848(q)(12)(A)(i) of
the Act on the additional episode-based measures which may be
introduced into MIPS in future years. We believe this will aid MIPS
eligible clinicians in understanding the measures and the
attribution rules and methods that we use to calculate performance on
these measures, which may be helpful in the event that we decide to
propose the measures for the MIPS cost performance category in future
rulemaking.
(i) Attribution
For the episode-based measures listed in Tables 4 and 5 of the
proposed rule (81 FR 28202), we proposed to use the attribution logic
used in the 2014 sQRUR (full description available at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/Detailed-Methods-2014SupplementalQRURs.pdf), with modifications to adjust for whether
performance is being assessed at an individual or group level. Please
refer to 81 FR 28208 of the proposed rule for our proposals to address
attribution differences for individuals and groups. For purposes of
this section, we will use the general term MIPS eligible clinicians to
indicate attribution for individuals or groups.
Acute condition episode-based measures would be attributed to all
MIPS eligible clinicians that bill at least 30 percent of inpatient
evaluation and management (IP E&M) visits during the initial treatment,
or ``trigger event,'' that opened the episode. E&M visits during the
episode's trigger event represent services directly related to the
management of the beneficiary's acute condition episode. MIPS eligible
clinicians that bill at least 30 percent of IP E&M visits are therefore
likely to have been responsible for the oversight of care for the
beneficiary during the episode. It is possible for more than one MIPS
eligible clinician to be attributed a single episode using this rule.
If an acute condition episode has no IP E&M claims during the episode,
then that episode is not attributed to any MIPS eligible clinician.
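The acute condition attribution rule above can be sketched as follows. This is an assumption-laden illustration of the stated 30 percent rule, not CMS's episode grouper; the clinician identifiers and visit counts are hypothetical.

```python
# Illustrative sketch: attribute an acute condition episode to every MIPS
# eligible clinician who billed at least 30 percent of the IP E&M visits
# during the episode's trigger event. More than one clinician may be
# attributed a single episode; if there are no IP E&M claims, the episode
# is attributed to no one. The input maps clinician identifiers (e.g.,
# TIN/NPI) to IP E&M visit counts -- an assumed data shape.

def attribute_acute_episode(ip_em_visits: dict, threshold: float = 0.30) -> set:
    total = sum(ip_em_visits.values())
    if total == 0:
        return set()  # no IP E&M claims during the episode
    return {c for c, n in ip_em_visits.items() if n / total >= threshold}
```

Note that because the threshold is a share rather than a plurality, two or even three clinicians can each meet the 30 percent floor for the same episode.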
Procedural episodes would be attributed to all MIPS eligible
clinicians that bill a Medicare Part B claim with a trigger code during
the trigger event of the episode. For inpatient procedural episodes,
the trigger event is defined as the IP stay that triggered the episode
plus the day before the admission to the IP hospital. For outpatient
procedural episodes constructed using Method A, the trigger event is
defined as the day of the triggering claim plus the day before and 2
days after the trigger date. For outpatient procedural episodes
constructed using Method B, the trigger event is defined as only the
day of the triggering claim. Any Medicare Part B claim or line during
the trigger event with the episode's triggering procedure code is used
for attribution. If more than one MIPS eligible clinician bills a
triggering claim during the trigger event, the episode is attributed to
each of the MIPS eligible clinicians. If co-surgeons bill the
triggering claim, the episode is attributed to each MIPS eligible
[[Page 77175]]
clinician. If only an assistant surgeon bills the triggering claim, the
episode is attributed to the assistant surgeon or group. If an episode
does not have a concurrent Medicare Part B claim with a trigger code
for the episode, then that episode is not attributed to any MIPS
eligible clinician.
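The trigger event windows described above can be summarized in a short sketch. Dates and the function shape are illustrative assumptions; this is not CMS's episode construction code.

```python
# Illustrative sketch of the trigger event windows stated above, returned
# as an inclusive (start, end) date pair:
#   - inpatient procedural episodes: the IP stay plus the day before
#     admission (trigger_date is the admission date; the discharge date is
#     supplied separately);
#   - outpatient Method A episodes: the day before through 2 days after
#     the triggering claim;
#   - outpatient Method B episodes: the day of the triggering claim only.
from datetime import date, timedelta

def trigger_event_window(trigger_date: date, setting: str,
                         method: str = "A", discharge_date: date = None):
    if setting == "inpatient":
        end = discharge_date or trigger_date
        return (trigger_date - timedelta(days=1), end)
    if method == "A":
        return (trigger_date - timedelta(days=1), trigger_date + timedelta(days=2))
    return (trigger_date, trigger_date)  # Method B
```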
The following is a summary of the comments we received regarding
our attribution methodology for the episode-based measures:
Comment: A commenter suggested that episodes be attributed to the
clinician with the highest Part B charges.
Response: The episode-based measures each have different
attribution methodologies. We believe that always attributing episodes
to the clinician with the highest Part B charges is not necessarily
appropriate in all cases, particularly in cases in which a procedure
may trigger the beginning of an episode.
Comment: A commenter suggested that until the patient relationship
codes are developed, clinicians should be allowed to select the cost
measures that apply to them.
Response: We believe that the cost measures that are included in
this final rule with comment period are constructed in such a way as to
ensure that clinicians or groups are measured for cost for the patients
for which they provide care. For example, a clinician or group would be
required to provide 20 coronary artery bypass grafts to be attributed
an episode-based measure for that procedure. We believe that requiring
a cardiothoracic surgeon or group to select this cost measure through
some kind of administrative mechanism would not add value to the
program and could potentially increase administrative burden for the
clinician.
Comment: A commenter suggested that CMS employ Method B, which
examines episodes independently, rather than Method A, in which cost is
assigned to episodes on the basis of hierarchical rules, in developing
episode-based measures for podiatrists.
Response: We continue to work on the development of episode groups
and are evaluating the use of Method A and Method B within that context
for a variety of medical conditions and procedures. Episode-based
measures using both methods are included in this final rule with
comment period.
Comment: A commenter expressed concern that certain specialties
such as hospital-based physicians and palliative care physicians will
have a large number of episode-based measures attributed to them.
Response: We believe that the episode-based measures represent a
wide variety of procedural and medical episodes. For the transition
year, we have limited the number of episode-based measures and reduced
the weight of the cost performance category but recognize that some
clinicians may have more attributed episode-based measures than others
based on the nature of the patients that they treat. However, it is
important to note that being attributed additional cost measures does
not change the weight of the cost performance category in the final
score, which is set at 0 percent for the 2019 MIPS payment year. In
addition, having more attributed episode-based measures does not
inherently disadvantage a clinician, particularly if the episodes are
lower in cost compared to the cost for similar episodes with similarly
complex patients. We intend to continue to develop episode-based
measures to ensure that all specialties of medicine may be measured on
cost in a similar fashion.
Following our consideration of the comments, we will finalize the
attribution methodology for episode-based measures as proposed.
(ii) Reliability
To ensure moderate reliability, we proposed at Sec.
414.1380(b)(2)(ii) to use a minimum of 20 cases for all episode-based
measures listed in Tables 4 and 5 of the proposed rule (81 FR 28386).
We proposed not to include any measures that do not have, on average,
at least moderate reliability (0.4) at 20 episodes.
Comment: Several commenters opposed the inclusion of episode-based
measures with a reliability of 0.4 at a 20 minimum case size and
recommended that only measures with a 0.7 reliability at a 20 minimum
case size be included.
Response: We believe that episode-based measures with a reliability
of 0.4 with a minimum attributed case size of 20 meet the standards for
being included as cost measures within the MIPS program. We aim to
measure cost for as many clinicians as possible and limiting episode-
based measures to reliability of 0.7 or 0.8 at a minimum case size of
20 would result in few individual clinicians being attributed enough
patients under these measures, particularly since the episode-based
measures represent only a subset of patients seen by an individual
clinician or group.
Please see section II.E.5.e.(3)(b) for additional discussion of
using 0.4 as the reliability threshold. All of the episode-based
measures that we are finalizing are reliable at this threshold for 20
cases at both the individual and group level. We are finalizing at
Sec. 414.1380(b)(2)(ii) that a MIPS eligible clinician must meet the
minimum case volume specified by CMS to be scored on a cost measure.
After considering the comments, we are finalizing our proposal that a
MIPS eligible clinician must have a minimum of 20 cases to be scored on
an episode-based measure.
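The finalized case-minimum rule can be sketched as follows; the measure names and case-count mapping are illustrative assumptions, not CMS data.

```python
# Illustrative sketch of the finalized policy above: a MIPS eligible
# clinician is scored on an episode-based measure only when at least 20
# cases are attributed for that measure.

CASE_MINIMUM = 20

def scored_measures(attributed_cases: dict, case_minimum: int = CASE_MINIMUM) -> set:
    """Return the measures on which the clinician meets the case minimum."""
    return {m for m, n in attributed_cases.items() if n >= case_minimum}
```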
(c) Attribution for Individuals and Groups
In the VM and sQRUR, all cost measurement was attributed at the
solo practitioner and group level, as identified by the TIN. In MIPS,
however, we proposed to evaluate performance at the individual and
group levels. For MIPS eligible clinicians whose performance is being
assessed individually across the other MIPS performance categories, we
proposed to attribute cost measures using the TIN/NPI rather than the
TIN. Attribution at the TIN/NPI level allows individual MIPS eligible
clinicians, as identified by their TIN/NPI, to be measured based on
cases that are specific to their practices, rather than being measured
on all the cases attributed to the group TIN. For MIPS eligible
clinicians that choose to have their performance assessed as a group
across the other MIPS performance categories, we proposed to attribute
cost measures at the TIN level (the group TIN under which they report).
The logic for attribution would be similar whether attributing to the
TIN/NPI level or the TIN level. As an alternative proposal, we
solicited comment on whether MIPS eligible clinicians that choose to
have their performance assessed as a group should first be attributed
at the individual TIN/NPI level and then have all cases assigned to the
individual TIN/NPIs attributed to the group under which they bill. This
alternative would apply one consistent methodology to both groups and
individuals, compared to having a methodology that assigns cases using
TIN/NPI for assessment at the individual level and another that assigns
cases using only TIN for assessment at the group level. For example,
the general attribution logic for the MSPB is to assign the MSPB
measure based on the plurality of claims (as measured by allowed
charges) for Medicare Part B services rendered during an inpatient
hospitalization that is an index admission for the MSPB measure. Our
proposed approach would determine ``plurality of claims'' separately
for individuals and groups. For individuals, we would assign the MSPB
measure using the ``plurality of claims'' by TIN/NPI, but for groups we
would determine the ``plurality of
[[Page 77176]]
claims'' by TIN. The alternative proposal, in contrast, would determine
the ``plurality of claims'' by TIN/NPI for both groups and individuals.
However, for individuals, only the MSPB measure attributed to the TIN/
NPI would be evaluated, while for groups the MSPB measure attributed to
any TIN/NPI billing under the TIN would be evaluated.
We requested comment on this proposal and the alternative considered.
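The "plurality of claims" comparison above can be sketched as follows. The claim shapes and identifiers are hypothetical; this illustrates only the difference between determining the plurality by TIN/NPI versus by TIN, not CMS's MSPB attribution code.

```python
# Illustrative sketch: attribute the MSPB measure to the identifier with
# the plurality of claims, as measured by allowed charges, for Part B
# services during the index admission. Under the proposal, groups would
# use the TIN; under the alternative, both individuals and groups would
# use the TIN/NPI. Claims are assumed (tin, npi, allowed_charges) tuples.
from collections import defaultdict

def attribute_mspb(claims, key):
    """key is 'tin' or 'tin_npi'; returns the identifier with the plurality."""
    totals = defaultdict(float)
    for tin, npi, charges in claims:
        ident = tin if key == "tin" else (tin, npi)
        totals[ident] += charges
    return max(totals, key=totals.get) if totals else None
```

As the sketch shows, the two keys can attribute the same admission differently: a TIN can accumulate the plurality across several NPIs even when no single TIN/NPI holds it.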
Comment: A commenter supported the proposal to attribute cost
measures at the TIN level for groups that elect to be assessed on
other MIPS performance categories as a group.
Response: We believe both attribution methodologies are valid, but
as described below, we are finalizing the alternative proposal.
Comment: Several commenters supported the alternative proposal of
attributing cost for all clinicians at the TIN/NPI level, regardless of
whether they participate in MIPS as a group or as individual
clinicians.
Response: We believe having a consistent attribution methodology
for individual and group reporting would be beneficial and simpler for
clinicians to understand. Therefore, we are finalizing the alternative
proposal.
To reduce complexity in the MIPS program, we are finalizing the
alternative proposal to attribute cost measures for all clinicians at
the TIN/NPI level. For those groups that participate in group reporting
in other MIPS performance categories, their cost performance category
scores will be determined by aggregating the scores of the individual
clinicians within the TIN. For example, if a TIN had one surgeon that
billed for 11 codes and another surgeon in that TIN billed for 12 codes
that would trigger the knee arthroplasty episode-based measure, neither
surgeon would have enough cases to be measured individually. However,
if the TIN elects group reporting, the TIN would be assessed on the 23
combined cases.
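The surgeon example above can be sketched as follows; the NPI labels and counts are illustrative, mirroring the 11- and 12-case example, and this is not CMS's aggregation implementation.

```python
# Illustrative sketch of the finalized alternative: episodes are first
# attributed at the TIN/NPI level, and when a TIN elects group reporting,
# the cases of its individual clinicians are pooled. A group may therefore
# meet the 20-case minimum even when no individual clinician does.

CASE_MINIMUM = 20

def group_case_count(individual_cases: dict) -> int:
    """individual_cases maps each NPI in the TIN to its attributed episode count."""
    return sum(individual_cases.values())

def group_meets_minimum(individual_cases: dict) -> bool:
    return group_case_count(individual_cases) >= CASE_MINIMUM
```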
(d) Application of Measures to Non-Patient Facing MIPS Eligible
Clinicians
Section 101(c) of the MACRA added section 1848(q)(2)(C)(iv) to the
Act, which requires the Secretary to give consideration to the
circumstances of professional types who typically furnish services
without patient facing interaction (non-patient facing) when
determining the application of measures and activities. In addition,
this section allows the Secretary to apply alternative measures or
activities to non-patient facing MIPS eligible clinicians that fulfill
the goals of a performance category. Section 101(c) of the MACRA also
added section 1848(q)(5)(F) to the Act, which allows the Secretary to
re-weight MIPS performance categories if there are not sufficient
measures and activities applicable and available to each type of MIPS
eligible clinician involved.
For the 2017 MIPS performance period, we did not propose any
alternative measures for non-patient facing MIPS eligible clinicians or
groups. This means that non-patient facing MIPS eligible clinicians or
groups may not be attributed any cost measures that are generally
attributed to clinicians who have patient facing encounters with
patients. We therefore anticipate that, similar to MIPS eligible
clinicians or groups that do not meet the required case minimum for any
cost measures, many non-patient facing MIPS eligible clinicians may not
have sufficient measures and activities available to report and would
not be scored on the cost performance category under MIPS. We refer
readers to section II.E.6.b.2. of this final rule with comment period
where we discussed how we would address performance category weighting
for MIPS eligible clinicians or groups who do not receive a performance
category score for a given performance category. We also intend to work
with non-patient facing MIPS eligible clinicians and specialty
societies to propose alternative cost measures for non-patient facing
MIPS eligible clinicians and groups under MIPS in future years. Lastly,
we solicited comment on how best to incorporate appropriate alternative
cost measures for all MIPS eligible clinician types, including non-
patient facing MIPS eligible clinicians.
The following is a summary of the comments we received.
Comment: Many commenters supported a policy to not attribute cost
measures to those clinicians and groups that meet the requirements of
non-patient facing MIPS eligible clinicians because these clinicians
would have little influence on cost, particularly with regard to the
measures that were proposed for the transition year of the program.
Response: We did not propose to preclude non-patient facing MIPS
eligible clinicians from receiving a score for the cost performance
category. Rather, based on the cost measures that we proposed for the
CY 2017 performance period, we did not anticipate many non-patient
facing MIPS eligible clinicians would have sufficient case volume as
the measures are generally attributed to clinicians who have patient-
facing encounters. If non-patient facing MIPS eligible clinicians do in
fact have sufficient case volume, however, they would be attributed
measures in accordance with the attribution methodology and would
receive a score for the cost performance category.
Comment: Many commenters recommended that CMS work to develop
alternative cost measures that could be used for non-patient facing
clinicians or groups in the future.
Response: We will continue to investigate all methods to measure
cost, including methods for those clinicians who provide services that
are not included in the existing cost measure attribution criteria.
We appreciate the comments received and will attribute cost
measures to non-patient facing MIPS eligible clinicians who have
sufficient case volume, in accordance with the attribution methodology.
(e) Additional System Measures
Section 1848(q)(2)(C)(ii) of the Act, as added by section 101(c) of
the MACRA, provides that the Secretary may use measures used for a payment
system other than for physicians, such as measures for inpatient
hospitals, for purposes of the quality and cost performance categories
of MIPS. The Secretary, however, may not use measures for hospital
outpatient departments, except in the case of items and services
furnished by emergency physicians, radiologists, and anesthesiologists.
We intend to align any facility-based MIPS measure decision across
the quality and cost performance categories to ensure consistent
policies for MIPS in future years. We refer readers back to section
II.E.5.b.(5) of this rule which discusses our strategy and solicits
comments related to this provision. Below is our response to comments
related to measuring the cost of facility-based clinicians.
Comment: Some commenters supported the consideration of inpatient
hospital cost measures for MIPS but requested that CMS create a
methodology with an appropriate attribution methodology that could
account for clinicians practicing in multiple facilities. Some
commenters supported the inclusion of inpatient hospital cost measures
as an option for certain clinicians and others opposed their inclusion
in MIPS.
Response: We will take these comments into consideration if we
propose system measures in future rulemaking.
Comment: Many commenters expressed concern that the total per
[[Page 77177]]
capita cost measure, MSPB, and episode-based measures would not capture
cost associated with their particular specialty or field of medicine,
such as anesthesiology. Commenters encouraged CMS to develop measures
that would capture cost covering the unique contributions of all
specialties.
Response: We will continue to develop more episode-based measures
and other mechanisms of measuring cost that will cover a broader group
of medical specialists in the coming years and will plan to work with
stakeholders to identify gaps in cost measurement.
We appreciate the comments and will take all comments into
consideration as we develop future cost measures.
(4) Future Modifications to Cost Performance Category
In the future, we intend to consider how best to incorporate
Medicare Part D costs into the cost performance category, as described
in section 1848(q)(2)(B)(ii) of the Act. We solicited public comments
on how we should incorporate those costs under MIPS for future years.
We also intend to continue developing and refining episode-based
measures for purposes of cost performance category measure
calculations.
The following is a summary of the comments we received regarding the
inclusion of Medicare Part D costs within cost measurement.
Comment: Several commenters expressed support for the inclusion of
Part D costs in future cost measures, some citing the contribution of
prescribing behavior to overall health costs and noting that including
costs from other categories without including oral prescription drugs
presented an incomplete picture.
Response: To the extent possible, we will investigate ways to
account for the cost of drugs under Medicare Part D in the cost
measures in the future, as feasible and applicable, in accordance with
section 1848(q)(2)(B)(ii) of the Act.
Comment: Several commenters opposed the inclusion of Part D drug
costs in future cost measures, noting that certain physicians prescribe
more expensive drugs than others and that there are technical
challenges to price standardizing Part D data; others questioned the
appropriateness of the data. Still others commented that including Part
D costs could create improper incentives to prescribe services based on
the part of Medicare that covers the service.
Response: Drugs covered under Medicare Part D are a growing
component of the overall costs for Medicare beneficiaries and one over
which clinicians have significant influence. However, not all patients
covered by Medicare Parts A and B are covered under a Medicare Part D
plan, which presents a technical challenge in assessing the cost of
drugs for all patients. In addition, Medicare Part D is provided
through private plans which independently negotiate payment rates for
certain drugs or drugs within a particular class. We will continue to
investigate methods to incorporate this important component of
healthcare spending into our cost measures in the future.
Comment: Several commenters suggested removing the costs associated
with drugs covered under Medicare Part B from cost in addition to those
covered under Medicare Part D.
Response: We believe that clinicians play a key role in prescribing
drugs for their patients and that the costs associated with drugs can
be a significant contributor to the overall cost of caring for a
patient. We do not believe it would be appropriate to remove the cost
of Medicare Part B drugs from the cost measures.
We appreciate the comments and will take all comments into
consideration as we develop future cost measures.
f. Improvement Activities Performance Category
(1) Background
(a) General Overview and Strategy
The improvement activities performance category focuses on one of
our MIPS strategic goals, to use a patient-centered approach to program
development that leads to better, smarter, and healthier care. We
believe improving the health of all Americans can be accomplished by
developing incentives and policies that drive improved patient health
outcomes. Improvement activities emphasize activities that have a
proven association with better health outcomes. The improvement
activities performance category also focuses on another MIPS strategic
goal, which is to design incentives that drive movement toward
delivery system reform principles and participation in APMs. A further
MIPS strategic goal we are striving to achieve is to establish policies
that can be scaled in future years as the bar for improvement rises.
Under the improvement activities performance category, we proposed
baseline requirements that will become more stringent in future years
and lay the groundwork for expansion toward continuous improvement over
time.
(b) The MACRA Requirements
Section 1848(q)(2)(C)(v)(III) of the Act defines an improvement
activity as an activity that relevant eligible clinician organizations
and other relevant stakeholders identify as improving clinical practice
or care delivery, and that the Secretary determines, when effectively
executed, is likely to result in improved outcomes. Section
1848(q)(2)(B)(iii) of the Act requires the Secretary to specify
improvement activities under subcategories for the performance period,
which must include at least the subcategories specified in section
1848(q)(2)(B)(iii)(I) through (VI) of the Act, and in doing so to give
consideration to the circumstances of small practices, and practices
located in rural areas and geographic health professional shortage
areas (HPSAs).
Section 1848(q)(2)(C)(iv) of the Act generally requires the
Secretary to give consideration to the circumstances of non-patient
facing MIPS eligible clinicians or groups and allows the Secretary, to
the extent feasible and appropriate, to apply alternative measures and
activities to such MIPS eligible clinicians and groups.
Section 1848(q)(2)(C)(v) of the Act required the Secretary to use a
request for information (RFI) to solicit recommendations from
stakeholders to identify improvement activities and specify criteria
for such improvement activities, and provides that the Secretary may
contract with entities to assist in identifying activities, specifying
criteria for the activities, and determining whether MIPS eligible
clinicians or groups meet the criteria set. In the MIPS and APMs RFI,
we requested recommendations to identify activities and specify
criteria for activities. In addition, we requested details on how data
should be submitted, the number of activities, how performance should
be measured, and what considerations should be made for small or rural
practices. There were two overarching themes from the comments that we
received in the MIPS and APMs RFI. First, the majority of the comments
indicated that all subcategories should be weighted equally and that
MIPS eligible clinicians or groups should be allowed to select from
whichever subcategories are most applicable to them during the
performance period. Second, commenters supported inclusion of a diverse
set of activities that are meaningful for individual MIPS eligible
clinicians or groups. We have reviewed all of the comments that we
received and took these recommendations into consideration
[[Page 77178]]
while developing the proposed improvement activities policies.
We are finalizing at Sec. 414.1305 the definition of improvement
activities, as proposed, to mean an activity that relevant MIPS
eligible clinicians, organizations, and other relevant stakeholders
identify as improving clinical practice or care delivery and that the
Secretary determines, when effectively executed, is likely to result in
improved outcomes.
(2) Contribution to Final Score
Section 1848(q)(5)(E)(i)(III) of the Act specifies that the
improvement activities performance category will account for 15 percent
of the final score, subject to the Secretary's authority to assign
different scoring weights under section 1848(q)(5)(F) of the Act.
Therefore, we proposed at Sec. 414.1355, that the improvement
activities performance category would account for 15 percent of the
final score.
Section 1848(q)(5)(C)(i) of the Act specifies that a MIPS eligible
clinician or group that is certified as a patient-centered medical home
or comparable specialty practice, as determined by the Secretary, must
be given the highest potential score for the improvement activities
performance category for the performance period. For a further
description of APMs that have a certified patient-centered medical home
designation, we refer readers to the proposed rule (81 FR 28234).
A patient-centered medical home would be recognized if it is a
nationally recognized accredited patient-centered medical home, a
Medicaid Medical Home Model, or a Medical Home Model. The NCQA Patient-
Centered Specialty Recognition, which qualifies as a comparable
specialty practice, would also be recognized. Nationally recognized
accredited patient-centered medical homes are recognized if they are
accredited by: (1) The Accreditation Association for Ambulatory Health
Care; (2) the National Committee for Quality Assurance (NCQA) patient-
centered medical home recognition; (3) The Joint Commission
Designation; or (4) the Utilization Review Accreditation Commission
(URAC).\18\ We refer readers to the proposed rule (81 FR 28330) for
further description of the Medicaid Medical Home Model or Medical Home
Model. The criteria for being an organization that accredits medical
homes are that the organization must be national in scope and must have
evidence of being used by a large number of medical organizations as
the model for their patient-centered medical home. We solicited comment
on our proposal for determining which practices would qualify as
patient-centered medical homes. We also note that practices may receive
a patient-centered medical home designation at a practice level, and
that individual TINs may be composed of both undesignated practices and
practices that have received a designation as a patient-centered
medical home (for example, only one practice site has received patient-
centered medical home designation in a TIN that includes five practice
sites). For MIPS eligible clinicians who choose to report at the group
level, reporting is required at the TIN level. We solicited comment on
how to provide credit for patient-centered medical home designations in
the calculation of the improvement activities performance category
score for groups when the designation only applies to a portion of the
TIN (for example, to only one practice site in a TIN that is comprised
of five practice sites).
---------------------------------------------------------------------------
\18\ Gans, D. (2014). A Comparison of the National Patient-
Centered Medical Home Accreditation and Recognition Programs.
Medical Group Management Association, www.mgma.com.
---------------------------------------------------------------------------
Section 1848(q)(5)(C)(ii) of the Act provides that MIPS eligible
clinicians or groups who are participating in an APM (as defined in
section 1833(z)(3)(C) of the Act) for a performance period must earn at
least one half of the highest potential score for the improvement
activities performance category for the performance period. For further
description of improvement activities and the APM scoring standard for
MIPS, we refer readers to the proposed rule (81 FR 28234). For all
other MIPS eligible clinicians or groups, we refer readers to the
scoring requirements for MIPS eligible clinicians and groups in the
proposed rule (81 FR 28247).
Section 1848(q)(5)(C)(iii) of the Act provides that a MIPS eligible
clinician or group must not be required to perform activities in each
improvement activities subcategory or participate in an APM to achieve
the highest potential score for the improvement activities performance
category.
Section 1848(q)(5)(B)(i) of the Act requires the Secretary to treat
a MIPS eligible clinician or group that fails to report on an
applicable measure or activity that is required to be reported as
achieving the lowest potential score applicable to the measure or
activity.
The following is a summary of the comments we received regarding
the improvement activities performance category contribution to the
final score.
Comment: Several commenters expressed concern about the burden of
complying with this performance category in addition to the other three
performance categories and some recommended that the performance
category not be included in the MIPS program, believing it would be
difficult to report. Some commenters requested that we remove the
improvement activities performance category completely.
Response: We recognize that there are challenges associated with
understanding how to comply with a new program such as MIPS and the
improvement activities performance category. However, the statute
requires the improvement activities performance category be included in
the Quality Payment Program. After consideration of the comments
expressing concern about reporting burden, we are reducing the number
of activities required for full credit from the proposed maximum of six
medium-weighted activities, three high-weighted activities, or some
combination thereof, to no more than four medium-weighted activities,
two high-weighted activities, or a combination of medium and high-
weighted activities in which each selected high-weighted activity
reduces the number of medium-weighted activities required. We believe
this is still aligned with the statute in measuring performance in this
performance category. We will continue to provide education and
outreach to provide further clarity.
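Purely as an illustration (not part of the regulation text), the
finalized full-credit combinations described above can be expressed as
a simple point check. The point values below are inferred from the
text: because four medium-weighted activities or two high-weighted
activities each earn full credit, a high-weighted activity is assumed
to substitute for two medium-weighted activities.

```python
# Illustrative sketch only; the point values are assumptions inferred
# from the finalized rule text (4 medium = 2 high = full credit).
FULL_CREDIT_POINTS = 4  # equivalent of four medium-weighted activities
MEDIUM_POINTS = 1
HIGH_POINTS = 2

def earns_full_credit(num_medium: int, num_high: int) -> bool:
    """Return True if the selected mix of medium- and high-weighted
    improvement activities meets the full-credit threshold."""
    points = num_medium * MEDIUM_POINTS + num_high * HIGH_POINTS
    return points >= FULL_CREDIT_POINTS

# Combinations consistent with the rule text:
assert earns_full_credit(4, 0)      # four medium-weighted activities
assert earns_full_credit(0, 2)      # two high-weighted activities
assert earns_full_credit(2, 1)      # one high replaces two medium
assert not earns_full_credit(3, 0)  # three medium alone is not enough
```

Under this reading, each selected high-weighted activity reduces the
number of medium-weighted activities required by two.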
Comment: Some commenters expressed concern that improvement
activities would not be successfully implemented because of the low
percentage that this category was given in the final MIPS scoring
methodology. The commenters suggested increasing the improvement
activities performance category's percentage of the final score.
Another commenter recommended reducing the quality performance
category's weighting from 50 percent to 35 percent and increasing the
improvement activities performance category from 15 percent to 30
percent for 2017, indicating this would increase the likelihood that
more MIPS eligible clinicians would fully participate.
Response: We believe we have appropriately weighted the improvement
activities performance category within the final score, particularly
given the statutory direction under section 1848(q)(5)(E)(i)(III) of
the Act that the category account for 15 percent of the final score,
subject to the Secretary's authority to assign different scoring
weights under certain circumstances. However, we intend to
[[Page 77179]]
monitor the effects of category weighting under MIPS over time.
Comment: Several commenters requested that CMS develop a definition
of a Medical Home or certified patient-centered medical home that
includes practices that are designated by private health plans such as
Blue Cross and Blue Shield of Michigan (BCBSM) patient-centered medical
home program. Some commenters also requested including regional
patient-centered medical home recognition programs that are free to
practices. Other commenters requested that CMS consider MIPS eligible
clinicians or groups that have completed a certification program that
has a demonstrated track record of support by non-Medicare payers,
state Medicaid programs, employers, or others in a region or state.
Some commenters requested that CMS consider other significant rigorous
certification programs or state-level certification. One example of a
state-level certification program, provided by a commenter, was the
Oregon patient-centered medical home certification. One commenter
suggested recognizing certified patient-centered medical homes that may
not have sought national certification. The same commenter also
suggested providing a MIPS eligible clinician or group full credit as a
certified patient-centered medical home if they were performing the
advanced primary care functions reflected in the Joint Principles of
the Patient-Centered Medical Home and the five key functions of the
Comprehensive Primary Care Initiative. One commenter suggested that any
MIPS eligible clinician or group that has received a certification from
any entity that meets the necessary criteria as a patient-centered
medical home accreditor should receive full credit. One commenter
requested that ``The Compliance Team'', a privately held, for-profit,
healthcare accreditation organization that receives deeming authority
from the CMS as an accreditation organization, be included as part of
the accreditation organizations for patient-centered medical home. This
commenter also stated that the exclusion of ``The Compliance Team''
from the final list of approved administering organizations would
create artificial barriers to entry that will likely drive up the cost
of accreditation because all the small practices and clinics that
already went through accreditation with The Compliance Team would need
to go through a second accreditation. One commenter requested that
Behavioral Health Home Certification also be recognized for full credit
as a patient-centered medical home. Some commenters further stated that
CMS should ensure that the activities and standards included in such
accredited programs are meaningful, incorporate private sector best
practices, and directly improve patient outcomes. Other commenters
agreed with using the accreditation programs that were proposed in the
rule to qualify patient-centered medical home models under the
improvement activities performance category for full credit, including
recommending that practices undergo regular re-accreditation by the
proposed bodies to ensure they are continuing to provide care in a
manner consistent with being a medical home. In addition, some
commenters recommended the Quality Payment Program develop a way to
reward practices that may not have reached patient-centered medical
home recognition but are in the process of transformation.
Response: We were not previously aware of additional certifying
bodies, beyond those cited in the proposal, that are used by a large
number of medical organizations and that adhere to similar national
guidelines for certifying a patient-centered medical home, meaning they
are national in scope. Consistent with the credit provided for
practices certified as a patient-centered medical home or comparable
specialty practice by the certifying bodies included in the proposal,
we will also recognize practices that have received accreditation or
certification from other certifying bodies that have certified a large
number of medical organizations and meet national guidelines. We
further define large to mean that the certifying body must have
certified 500 or more member practices. In addition to the 500 or more
practice threshold for certifying bodies, the second criterion requires
a practice to: (1) Have a personal clinician in a team-based practice;
(2) have a whole-person orientation; (3) provide coordinated or
integrated care; (4) focus on quality and safety; and (5) provide
enhanced access (Gans, 2014). The Oregon Patient-Centered Primary Care
Home Program described by commenters and the Blue Cross Blue Shield of
Michigan (BCBSM) patient-centered medical home program are two examples
of programs that would meet these two criteria in the proposed rule.
While we believe that some of the advanced primary care functions
in the Joint Principles of the Patient-Centered Medical Home and key
functions of the Comprehensive Primary Care Initiative might count as
improvement activities, we maintain a distinction between being an
actual certified patient-centered medical home per the statute and
performing some of the functions of one. Therefore, performing these
functions alone would not qualify for full credit. Other certifications
that are not for patient-centered medical homes or comparable specialty
practices also would not qualify automatically for the highest score.
MIPS eligible clinicians and groups that receive certification from
other accreditation organizations that certify for a patient-centered
medical home or comparable specialty practice, including accredited
organizations that receive deeming authority from CMS, such as The
Compliance Team, would receive full credit as long as those accredited
bodies meet the two criteria. These two criteria are: (1) The
accredited body must have certified 500 or more member practices as a
patient-centered medical home or comparable practice; and (2) they must
meet national guidelines.
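The two-part test stated in this response can be sketched as a simple
predicate. This is an illustration only; the function name and
parameters below are hypothetical and are not CMS terminology.

```python
# Hypothetical sketch of the two finalized criteria under which
# certification from another accrediting body receives full credit:
# (1) the body has certified 500 or more member practices as a
#     patient-centered medical home or comparable practice, and
# (2) the body meets national guidelines.
PRACTICE_THRESHOLD = 500

def certifier_qualifies(practices_certified: int,
                        meets_national_guidelines: bool) -> bool:
    """Return True if an accrediting body meets both finalized criteria."""
    return (practices_certified >= PRACTICE_THRESHOLD
            and meets_national_guidelines)

assert certifier_qualifies(500, True)        # exactly at the threshold
assert not certifier_qualifies(499, True)    # fails the size criterion
assert not certifier_qualifies(1000, False)  # fails national guidelines
```

Both criteria must be met; a large certifying body that does not adhere
to national guidelines would not qualify, and vice versa.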
Comment: Some commenters agreed with CMS regarding not requiring
that a MIPS eligible clinician select from any specific subcategories
of activities. However, the commenters opposed CMS' suggestion to
eventually calculate performance in this performance category, not only
because of the technical complexity of doing so, but also because it
would ignore the overall intent of the performance category, which is
to recognize engagement in innovative activities that contribute to
quality rather than actual performance. One commenter encouraged CMS to
re-consider the improvement activities and scoring criteria in future
years to incentivize physician improvement.
Response: We will take this suggestion into account as we continue
implementation and refinement of the MIPS program in the future. While
we recognize that it may be technically complex at this time to
calculate performance within the improvement activities performance
category, our expectation is that such a process would become simpler
over time as MIPS eligible clinicians become accustomed to implementing
improvement activities. For further discussion of improvement
activities scoring as a component of the final score, we refer readers
to section II.E.6.a.(4) in this final rule with comment period.
After consideration of the comments regarding the contribution to
final score we are finalizing at Sec. 414.1355, that the improvement
activities performance category would account for 15 percent of the
final score. We are not finalizing our policy on recognizing only
practices that have received nationally recognized accredited or
certified patient-centered
[[Page 77180]]
medical home certifications. Rather, we are finalizing at Sec.
414.1380 an expanded definition of what is acceptable for recognition
as a certified patient-centered medical home or comparable specialty
practice. We are recognizing a MIPS eligible clinician or group as
being a certified patient-centered medical home or comparable specialty
practice if they have achieved certification or accreditation as such
from a national program, or they have achieved certification or
accreditation as such from a regional or state program, private payer
or other body that certifies 500 or more practices for
patient-centered medical home accreditation or comparable specialty
practice certification. Examples of nationally recognized accredited
patient-centered medical homes are: (1) The Accreditation Association
for Ambulatory Health Care; (2) the National Committee for Quality
Assurance (NCQA) Patient-Centered Medical Home; (3) The Joint Commission
Designation; or (4) the Utilization Review Accreditation Commission
(URAC). We are finalizing that the criteria for being a nationally
recognized accredited patient-centered medical home are that it must be
national in scope and must have evidence of being used by a large
number of medical organizations as the model for their patient-centered
medical home. We will also provide full credit for the improvement
activities performance category for a MIPS eligible clinician or group
that has received certification or accreditation as a patient-centered
medical home or comparable specialty practice from a national program
or from a regional or state program, private payer or other body that
administers patient-centered medical home accreditation and certifies
500 or more practices for patient-centered medical home accreditation
or comparable specialty practice certification.
(3) Improvement Activities Data Submission Criteria
(a) Submission Mechanisms
For the purpose of submitting under the improvement activities
performance category, we proposed in the proposed rule (81 FR 28181) to
allow for submission of data for the improvement activities performance
category using the qualified registry, EHR, QCDR, CMS Web Interface,
and attestation data submission mechanisms. If technically feasible, we
would use administrative claims data to supplement the improvement
activities submission. Regardless of the data submission method, all
MIPS eligible clinicians or groups must select activities from the
improvement activities inventory provided in Table H in the Appendix
to this final rule with comment period. We believe the proposed data
submission methods would allow for greater access and ease in
submitting data, as well as consistency throughout the MIPS program.
In addition, we proposed at Sec. 414.1360, that for the transition
year only, all MIPS eligible clinicians or groups, or third party
intermediaries such as health IT intermediaries, QCDRs and qualified
registries that submit on behalf of a MIPS eligible clinician or group,
must designate a yes/no response for activities on the improvement
activities inventory. In the case where a MIPS eligible clinician or
group is using a health IT intermediary, QCDR, or qualified registry
for their data submission, the MIPS eligible clinician or group will
certify all improvement activities have been performed and the health
IT intermediary, QCDR, or qualified registry will submit on their
behalf. An agreement between a MIPS eligible clinician or group and a
health IT vendor, QCDR, or qualified registry for data submission for
improvement activities as well as other performance data submitted
outside of the improvement activities performance category could be
contained in a single agreement, minimizing the burden on the MIPS
eligible clinician or group. See the proposed rule (81 FR 28281) for
additional details.
We proposed to use the administrative claims method, if technically
feasible, only to supplement improvement activities performance
category submissions. For example, if technically feasible, MIPS
eligible clinicians or groups, using the telehealth modifier GT, could
get automatic credit for this activity. We requested comments on these
proposals.
The following is a summary of the comments we received regarding
the improvement activities performance category data submission
criteria and mechanisms.
Comment: Some commenters noted that the definitions of some
improvement activities (such as those that require patient-specific
factors) are impossible for CEHRT to determine from the data in the
EHR. The commenters believed these activities will create usability
problems and complicate clinical workflows.
Response: If an EHR vendor or developer cannot complete system
changes to support usability and simplify clinical workflows for some
improvement activities, a MIPS eligible clinician or group may use
another calculation method to support that attestation. For example, a
MIPS eligible clinician or group may use their CEHRT to generate a list
of patients for whom they have prescribed an antidiabetic agent (for
example, insulin) and use an associated documented record with
reference to an individual glycemic treatment goal that includes
patient-specific factors to identify the completion rate through
manual or other IT assisted calculation. We also encourage MIPS
eligible clinicians to work with their CEHRT system developers to
ensure that their systems consider the MIPS eligible clinician's
workflow needs. In addition, we note that ONC recently released an EHR
Contract Guide, available at https://www.healthit.gov/sites/default/files/EHR_Contracts_Untangled.pdf, which is designed to help clinicians
and developers work together to consider key issues related to product
needs and product operation.
Comment: One commenter opposed separate processes for attesting
improvement activities when those activities are related to advancing
care information or quality measures performance categories.
Response: For the transition year of MIPS, we have concluded that
we must require separate processes for attestation in separate
performance categories, including cases where improvement activities
are related to the advancing care information or quality performance
categories. Refer to section II.E.5.g. and Table H in the Appendix to
this final rule with comment period for more information on improvement
activities that are designated activities receiving a 10 percent bonus
in the advancing care information performance category. For the
transition year, we have revised the policy so that additional
designated activities in Table H in the Appendix to this final rule
with comment period may also qualify for a bonus in the advancing care
information performance category; we refer readers to section
II.E.5.g.(5) of this final rule with comment period for more
information on this bonus. MIPS eligible clinicians should factor this
bonus into their selection of activities to meet the requirements of
the improvement activities performance category as well.
[[Page 77181]]
We intend to continue examining how to streamline reporting
requirements under MIPS in the future.
Comment: Several commenters requested additional clarification on
how MIPS eligible clinicians would report as a group for the
improvement activities performance category. The commenters provided
suggestions for how CMS should provide credit for those groups,
including suggestions: (1) That CMS not require all MIPS eligible
clinicians in a group to report all activities in the transition year;
(2) that CMS specify how many clinicians in each group must participate
in each activity to achieve points for the entire group; and (3) that
CMS give credit to the entire group if at least part of a group is
performing an activity.
Response: All MIPS eligible clinicians reporting as a group will
receive the same score for the improvement activities performance
category. If at least one clinician within the group is performing the
activity for a continuous 90 days in the performance period, the group
may report on that activity.
Comment: A few commenters expressed concern with the improvement
activities performance category noting that it will be necessary to
have timely specifications on how to satisfy the qualifications for
each activity to earn improvement activities credit.
Response: The improvement activities inventory in Table H in the
Appendix to this final rule with comment period includes a description
of the specifications for how to satisfy the qualifications for each
activity in order to earn points.
Comment: Some commenters requested clarification on the submission
mechanisms for the improvement activities performance category. The
commenters believed that some activities require use of a third party
vendor while others do not, and stated that it is unclear how MIPS
eligible clinicians will report on activities within the improvement
activities performance category.
Response: The submission mechanisms for the improvement activities
performance category are listed in section II.E.5.f.(3) of this final
rule with comment period. We agree there are some activities such as
those that reference the use of a QCDR that may require a third party
vendor. There are many others, however, that do not require third party
vendor engagement or suggest that use of certified EHR technology is
one way to support a given activity but not the only way to support an
activity. We will provide technical assistance through subregulatory
guidance to further explain how MIPS eligible clinicians will report on
activities within the improvement activities performance category. This
subregulatory guidance will also include how MIPS eligible clinicians
will be able to identify a specific activity through some type of
numbering or other similar convention.
Comment: One commenter requested clarification that if an EHR
vendor reports the improvement activities performance category for a
MIPS eligible clinician or group, the vendor is simply reporting the
MIPS eligible clinician's or group's attestation of success, not
attesting to that success.
Response: The commenter is correct in that the vendor simply
reports the MIPS eligible clinician's or group's attestation, on behalf
of the clinician or group, that the improvement activities were
performed. The vendor is not attesting on its own behalf that the
improvement activities were performed.
Comment: Another commenter recommended allowing improvement
activities to be reported via the CMS Web Interface for the transition
year, rather than through a QCDR or EHR.
Response: The CMS Web Interface is one of the data submission
mechanisms available for the improvement activities performance
category reporting. We have included a number of possible submission
mechanisms for MIPS and recognize the need to make the attestation
process as simple as possible.
Comment: One commenter recommended that CMS provide additional
clarity in the final rule with comment period on how MIPS eligible
clinicians should attest if they meet part, but not all, of the entire
improvement activity. In order to provide a more accurate and fair
score, this commenter recommended providing more prescriptive criteria
so that points may be assigned for sub-activities within each activity.
Response: A MIPS eligible clinician must meet all requirements of
the activity to receive credit for that activity. Partial satisfaction
of an activity is not sufficient for receiving credit for that
activity. However, many activities offer multiple options for how
clinicians may successfully complete them and additional criteria for
activities are already included in the improvement activities
inventory.
Comment: Some commenters supported CMS' proposed ``yes/no''
responses via reporting mechanisms of MIPS eligible clinicians' choice,
and requested that we consider collecting more detailed responses in
the future. Other commenters called on CMS to ensure that improvement
activities chosen by MIPS eligible clinicians are relevant and useful
for improving care in their practices. One commenter expressed
reservations about attestation and requested that CMS verify that MIPS
eligible clinicians perform the activities. Still others, however,
called on CMS to continue allowing flexibility for MIPS eligible
clinicians, including attestation options.
Response: We will continue examining changes in the data collection
process with the expectation that, where applicable, specifications and
data collection may be added on an activity-by-activity basis. We will
also verify data through the data validation and audit process as
necessary.
Comment: One commenter recommended that the certifying boards be
included as reporting agents for improvement activities.
Response: We will take this suggestion into consideration for
future rulemaking. To the extent possible, we will work with the
patient-centered medical home and comparable specialty practice
certifying bodies and other certification boards to verify practice
status.
Comment: One commenter recommended that CMS align improvement
activities across the country to facilitate shared learning and prevent
waste and inefficiency, and create a ``single source'' option for
clinicians for reporting, measurement benchmarking, and feedback that
also counts toward the improvement activities performance category.
Response: We will take this suggestion into consideration for
future rulemaking.
After consideration of the comments received regarding the
improvement activities data submission criteria we are not finalizing
the policies as proposed. Specifically, we are not finalizing the data
submission method of administrative claims data to supplement the
improvement activities as it is not technically feasible at this time.
We are finalizing at Sec. 414.1360 to allow for submission of data
for the improvement activities performance category using the qualified
registry, EHR, QCDR, CMS Web Interface, and attestation data submission
mechanisms. Regardless of the data submission method, with the
exception of MIPS APMs, all MIPS eligible clinicians or groups must
select activities from the improvement activities inventory provided in
Table H in the Appendix to this final rule with comment period.
[[Page 77182]]
In addition, we are finalizing at Sec. 414.1360 that for the
transition year of MIPS, all MIPS eligible clinicians or groups, or
third party intermediaries such as health IT vendors, QCDRs and
qualified registries that submit on behalf of a MIPS eligible clinician
or group, must designate a yes response for activities on the
improvement activities inventory. In the case where a MIPS eligible
clinician or group is using a health IT vendor, QCDR, or qualified
registry for their data submission, the MIPS eligible clinician or
group will certify all improvement activities have been performed and
the health IT vendor, QCDR, or qualified registry will submit on their
behalf.
We are also including a designation column in the improvement
activities inventory that will show which activities qualify for the
advancing care information bonus finalized at Sec. 414.1380 and refer
readers to Table H in the Appendix to this final rule with comment
period.
(b) Weighted Scoring
While we considered both equal and differentially weighted scoring
in this performance category, the statute requires a differentially
weighted scoring model by requiring 100 percent of the potential score
in the improvement activities performance category for patient-centered
medical home participants, and a minimum 50 percent score for APM
participants. For additional activities in this category, we proposed
at Sec. 414.1380 a differentially weighted model for the improvement
activities performance category with two categories: Medium and high.
The justification for these two weights is to provide flexible scoring
due to the undefined nature of activities (that is, improvement
activities standards are not nationally recognized and there is no
entity for improvement activities that serves the same function as the
NQF does for quality measures). Improvement activities are weighted as
high based on alignment with our national public health priorities and
programs such as the Quality Innovation Network-Quality Improvement
Organization (QIN/QIO) or the Comprehensive Primary Care Initiative,
which recognizes specific activities related to expanded access and
integrated behavioral health as important priorities. Activities that
require performance of multiple actions, such as participation in the
Transforming Clinical Practice Initiative; seeing new and follow-up
Medicaid patients in a timely manner in the clinician's state Medicaid
program; or activities identified as public health priorities (such as
emphasis on anticoagulation management or utilization of prescription
drug monitoring programs), were weighted as high.
The statute references certified patient-centered medical homes as
achieving the highest score for the MIPS program. MIPS eligible
clinicians or groups may use that to guide them in the criteria or
factors that should be taken into consideration to determine whether to
weight an activity medium or high. We requested comments on this
proposal, including criteria or factors we should take into
consideration to determine whether to weight an activity medium or
high.
The following is a summary of the comments we received regarding
weighted scoring for improvement activities.
Comment: One commenter recommended that we establish three
weighting categories for the improvement activities performance
category: (1) High--30 percent; (2) Medium--20 percent; and (3) Low--10
percent. The commenter stated that this weighting allocation would
allow for the development of a third category for easier improvement
activities.
Response: Generally, we received comments on the two weightings,
high and medium. We believe there were no activities that merited a
classification as a lower weighted activity during the MIPS transition
year. However, in future years, through the annual call for activities
and when more data are available on which activities are most
frequently reported, we will reevaluate the applicability of these
weights and potential reclassification of activities into lower
weights.
Comment: Commenters noted an inconsistency regarding the weighting
of activities related to the Prescription Drug Monitoring Program
(PDMP). Section II.E.5.f.(3)(b) of the proposed rule (81 FR 28261)
references this as a high priority activity; however, the PDMP related
activity, ``Annual registration in the Prescription Drug Monitoring
Program'' in Table H, in the Appendix of this final rule with comment
period is listed as a medium-weighted activity (81 FR 28570).
    Response: There are two PDMP activities: one with a medium weight
(registering for the PDMP) and one with a high weight (utilizing the
PDMP). We added language to the high-weighted PDMP activity to
differentiate it from the medium-weighted PDMP activity. We refer
readers to Table H in the Appendix to this final rule with comment
period for the additional language.
Comment: Several commenters supported the proposed list of
activities but recommended that the number of required activities be
reduced and that more activities be highly weighted to reduce the
reporting burden for MIPS eligible clinicians.
Response: As discussed in section II.E.5.f.(2) of this final rule
with comment period, we have reduced the number of activities that MIPS
eligible clinicians are required to report to no more than four medium-
weighted activities, two high-weighted activities, or any combination
thereof, for a total of 40 points. We are reducing the number of
activities for small practices, practices located in rural areas and
geographic HPSAs, and non-patient facing MIPS eligible clinicians to no
more than one high-weighted activity or two medium-weighted activities,
with each activity receiving double weighting so that these clinicians
can also achieve a total of 40 points.
Comment: Several commenters suggested that CMS expand the number of
high-weighted activities, noting that there were only 11 high-weighted
activities out of 90, which may prevent MIPS eligible clinicians from
reporting high-weighted improvement activities, and that the Emergency
Response and Preparedness subcategory was the only subcategory without
a high-weighted activity.
Response: We are changing one existing activity in the Emergency
Response and Preparedness subcategory from ``Participation in domestic
or international humanitarian volunteer work. MIPS eligible clinicians
and groups must be registered for a minimum of 6 months as a volunteer
for domestic or international humanitarian volunteer work'' to
``Participation in domestic or international humanitarian volunteer
work. Activities that simply involve registration are not sufficient.
MIPS eligible clinicians attest to domestic or international
humanitarian volunteer work for a period of a continuous 60 days or
greater.'' We have changed this activity so that rather than requiring
MIPS eligible clinicians to be registered for 6 months, we are
requiring them to participate for 60 days. This change is in line with
our overall new 90-day performance period policy. The 60-day
participation would fall within that new 90-day window. We are also
changing this existing activity from a medium-weighted to a high-weighted
activity because such volunteer work is intensive, often involves
travel, and entails working in challenging physical and clinical
circumstances. Table H in the
[[Page 77183]]
Appendix to this final rule with comment period reflects this revised
description of the existing activity and revised weighting. We note,
however, that this is a change for this transition year for the 2017
performance period only. In addition, we are changing the weight from
medium to high of the one activity related to ``Participating in a
Rural Health Clinic (RHC), Indian Health Service (IHS), or Federally
Qualified Health Center (FQHC) in ongoing
engagement activities that contribute to more formal quality
reporting'' which we believe is consistent with section
1848(q)(2)(B)(iii) of the Act, which requires the Secretary to give
consideration to the circumstances of practices located in rural areas
and geographic HPSAs. Rural health clinics would be included in that
definition for consideration of practices in rural areas. Table H in
the Appendix to this final rule with comment period reflects this
revised weighting.
Comment: Some commenters recommended assigning a higher weight to
QCDR-related improvement activities and QCDR functions, and one
commenter recommended that use of a QCDR count for several activities.
Response: Participating in a QCDR is not sufficient for
demonstrating performance of multiple improvement activities, and we do
not believe at this time that it warrants a higher weighting. In
addition, QCDR participation was not proposed as a high-weighted
activity because, while useful for data collection, it is neither
critical for supporting certified patient-centered medical homes, which
is what we considered in proposing whether an improvement activity
would be a high-weighted activity, nor does it require multiple actions.
We also note that while QCDR participation may not automatically confer
improvement activities credit, it may put MIPS eligible clinicians in a
position to report multiple improvement activities, since there are
several that specifically reference QCDR participation. We ask that
each MIPS eligible clinician select from the broad list of activities
provided in Table H in the Appendix to this final rule with comment
period in order to achieve their total score.
Comment: Several commenters made suggestions for weighting within
the improvement activities performance category. Some commenters
recommended that CMS increase the number of high weight activities
because they believed this would allow MIPS eligible clinicians to
select activities that are more meaningful without sacrificing time and
energy that should be spent with patients. Other commenters offered
suggestions for additional activities that should be allocated high
weight under the performance category, or suggested consolidating
activities under subcategories that could be afforded high weight.
    Response: Additional reweighting, beyond that included in this final
rule with comment period, will not occur until a revised improvement
activities inventory list is finalized through the rulemaking process.
We will take this recommendation into consideration for future
rulemaking.
Comment: Some commenters made several suggestions for providing
additional credit to MIPS eligible clinicians under the improvement
activities performance category. For example, one commenter recommended
giving automatic credit to surgeons for providing 24/7 access to MIPS
eligible clinicians, groups, or care teams for advice about urgent or
emergent care because surgeons provide on-call coverage and are
available to medical facilities that provide after-hours access. Other
commenters suggested that specialists that qualify for additional
credit under the Blue Cross Blue Shield of Michigan Value-Based
Reimbursement program should receive full credit for improvement
activities performance category. Additional commenters suggested that
we consider providing automatic credit for the improvement activities
performance category to MIPS eligible clinicians participating in a
QCDR rather than requiring attestation for each individual improvement
activity. One commenter recommended that ED clinicians automatically
earn at least a minimum score of one-half of the highest potential
score for this performance category simply for providing this access on
an ongoing basis, noting that emergency clinicians are one of the few
clinician specialties that truly provide 24/7 care.
Response: We will consider these requests in future rulemaking for
the MIPS program. As discussed in section II.E.5.f.(3)(c) of this final
rule with comment period, we are revising our policy regarding the
number of required activities for the transition year of MIPS.
Specifically, we are asking MIPS eligible clinicians or groups that are
not in MIPS APMs to select a reduced number of activities: either four
medium-weighted activities, two medium-weighted and one high-weighted
activity, or two high-weighted activities. MIPS eligible clinicians or
groups in small practices, practices in rural areas or geographic
HPSAs, and non-patient facing MIPS eligible clinicians are only
required to select one medium-weighted activity for one-half of the
credit for this performance category, or two medium-weighted activities
or one high-weighted activity for full credit for this performance
category.
Comment: Some commenters requested that the CAHPS for MIPS survey
be included as a medium-weighted improvement activity.
    Response: We disagree, and believe assessing patients' experiences
as they interact with the health care system is a valuable indication
of merit. Please note that there are no reporting thresholds for
improvement activities; this allows flexibility for MIPS eligible
clinicians and groups to report surveys in a way that best reflects
their efforts. Therefore, the CAHPS for MIPS survey is included as a
high-weighted activity under the activity called ``Participation in the
Consumer Assessment of Healthcare Providers and Systems Survey (CAHPS)
or other Supplemental Questionnaire Items.''
Comment: Some commenters supported patient-centered medical homes
and supported these entities receiving full credit for improvement
activities performance category. One commenter suggested that patient-
centered medical homes stratify data by disparity variables and
implement targeted interventions to address health disparities. Some
commenters were concerned that groups of less than 50 would receive the
highest potential score under the improvement activities performance
category, while groups with greater than 50 would receive partial
credit. One commenter stated that larger groups have the inherent
capability of assuming greater risk. One commenter also requested that
the 50-group threshold be stricken from the language so that a group of
any size that has acquired patient-centered medical home certification
from a recognized entity would be given full credit for improvement
activities, which would encourage all groups, regardless of size, to
pursue patient-centered medical home certification, as such
certification is fundamental to good practice. Additional commenters
suggested including activities under the improvement activities
performance category that are associated with actions conducted by a
certified patient-centered medical home. One commenter recommended the
following subcategories of activities for the improvement activities
performance category that are aligned with elements of a patient
centered medical home: Expanded practice access, population management,
care coordination, beneficiary engagement, and patient
[[Page 77184]]
safety and practice assessment. This commenter believed that the
presentation of the information in this way will allow clinicians to
better understand the patient-centered medical home model and decide
how to best deliver care under MIPS.
Response: We note that there is no limit on the size of a practice
in a patient-centered medical home for eligibility for full improvement
activities credit. We refer the commenter to section II.E.8. of this
final rule with comment period on APMs regarding the establishment of
thresholds of less than 50 as it relates to APM incentive payments. We
encourage MIPS eligible clinicians and groups to work with appropriate
certifying bodies to consider that in the future. We will also look for
ways to reorganize the existing improvement activities inventory and
will work with clinicians and others in future years on the best way to
present this list of activities.
Comment: A few commenters supported giving 50 percent credit in the
improvement activities performance category to MIPS APMs.
Response: It is important to note that it was statutorily mandated
that MIPS eligible clinicians participating in APMs receive at least
one-half of the highest score in the improvement activities performance
category.
Comment: Other commenters recommended that we establish three
weighting categories for the improvement activities performance
category: (1) High--30 percent; (2) medium--20 percent; and (3) low--10
percent. The commenter stated that this weighting allocation would
allow for the development of a third category for easier improvement
activities.
Response: We will consider other weighting options as appropriate
for improvement activities in future rulemaking.
After consideration of the comments regarding weighted scoring we
are finalizing at Sec. 414.1380 a differentially weighted model for
the improvement activities performance category with two categories:
Medium and high. We refer readers to the following sections of this
final rule with comment period in reference to the improvement
activities performance category: Section VI.H for the modified list of
high-weighted and medium-weighted activities, section II.E.5.f.(3)(c)
for information on the number of activities required to achieve the
highest score, section II.E.6.a.(4)(a) for information on how points
will be assigned, section II.E.6.a.(4)(b) for how the highest potential
score can be achieved, section II.E.6.a.(4)(c) on how we will recognize
a MIPS eligible clinician or group for qualifying for the points for a
certified patient-centered medical home or comparable specialty
practices, and section II.E.6.a.(4)(d) for how the improvement
activities performance category score will be calculated.
(c) Submission Criteria
We proposed at Sec. 414.1380 to set the improvement activities
submission criteria under MIPS, to achieve the highest potential score
of 100 percent, at three high-weighted improvement activities (20
points each) or six medium-weighted improvement activities (10 points
each), or some combination of high and medium-weighted improvement
activities to achieve a total of 60 points for MIPS eligible clinicians
participating as individuals or as groups (refer to Table H in the
Appendix to this final rule with comment period for improvement
activities and weights). MIPS eligible clinicians or groups that select
less than the designated number of improvement activities will receive
partial credit based on the weighting of the improvement activity
selected. To achieve a 50 percent score, one high-weighted and one
medium-weighted improvement activity or three medium-weighted
improvement activities are required for these MIPS eligible clinicians
or groups.
    Exceptions to the above apply for: Small practices; MIPS eligible
clinicians and groups located in rural areas; MIPS eligible clinicians
and groups located in geographic HPSAs; non-patient facing MIPS
eligible clinicians or groups; and MIPS eligible clinicians or groups
that participate in an APM or a patient-centered medical home
submitting in MIPS.
For MIPS eligible clinicians and groups that are small practices,
located in rural areas or geographic HPSAs, or non-patient facing MIPS
eligible clinicians or groups, to achieve the highest score of 100
percent, two improvement activities are required (either medium or
high). For MIPS eligible clinicians or groups that are small practices,
located in rural areas, located in HPSAs, or non-patient facing MIPS
eligible clinicians or groups, in order to achieve a 50 percent score,
one improvement activity is required (either medium or high).
MIPS eligible clinicians or groups that participate in APMs are
considered eligible to participate under the improvement activities
performance category unless they are participating in an Advanced APM
and they have met the Qualifying APM Participant (QP) thresholds or are
Partial QPs that elect not to report information. A MIPS eligible
clinician or group that is participating in an APM and participating
under the improvement activities performance category will receive one
half of the total improvement activities score just through their APM
participation. These are MIPS eligible clinicians or groups that we
identify as participating in APMs for MIPS and may participate under
the improvement activities performance category. To achieve the total
improvement activities score, such MIPS eligible clinicians or groups
will need to identify that they participate in an APM and this APM will
submit the eligible clinicians' improvement activities score for that
specific model type.
For further description of MIPS eligible clinicians or groups that
are required to report to MIPS under the APM scoring standard and their
improvement activities scoring requirements, we refer readers to the
proposed rule (81 FR 28234). For all other MIPS eligible clinicians or
groups participating in APMs that would report to MIPS, this section
applies and we also refer readers to the scoring requirements for these
MIPS eligible clinicians or groups in the proposed rule (81 FR 28237).
Since we cannot measure variable performance within a single
improvement activity, we proposed at Sec. 414.1380 to compare the
improvement activities points associated with the reported activities
against the highest number of points that are achievable under the
improvement activities performance category which is 60 points. We
proposed that the highest potential score of 100 percent can be
achieved by selecting a number of activities that will add up to 60
points. MIPS eligible clinicians and groups, including those
participating in an APM, that select activities under the improvement
activities performance category can achieve the highest potential score
of 60 points by selecting activities whose weights add up to the
60-point maximum. We refer readers to the scoring section of the
proposed rule (81 FR 28237) for additional rationale for using 60
points for the transition year of MIPS.
If a MIPS eligible clinician or group reports only one improvement
activity, we would score that activity accordingly, as 10 points for a
medium-level activity or 20 points for a high-level activity. If a MIPS
eligible clinician or group reports no improvement activities, then the
MIPS eligible clinician or group would receive a zero score for the
improvement activities performance category. We
[[Page 77185]]
believe this proposal allows us to capture variation in the total
improvement activities reported.
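For illustration only, the proposed scoring arithmetic described above (20 points per high-weighted activity, 10 points per medium-weighted activity, capped at a 60-point maximum) can be sketched in Python; the function name and structure are illustrative assumptions, not part of this final rule with comment period:

```python
# Illustrative sketch (not part of the rule) of the PROPOSED submission
# criteria: high-weighted activities earn 20 points, medium-weighted earn 10,
# and the category score is the points earned, capped at the 60-point maximum.
PROPOSED_MAX_POINTS = 60
WEIGHT_POINTS = {"high": 20, "medium": 10}

def proposed_category_percent(activities):
    """Return the percent score for a list of 'high'/'medium' activity weights."""
    points = sum(WEIGHT_POINTS[w] for w in activities)
    return min(points, PROPOSED_MAX_POINTS) / PROPOSED_MAX_POINTS * 100
```

Under this sketch, three high-weighted or six medium-weighted activities reach 100 percent, one high-weighted plus one medium-weighted activity (or three medium-weighted activities) yields the 50 percent score, and reporting no activities yields zero, matching the proposed policy described above.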
In addition, we believe these are reasonable criteria for MIPS
eligible clinicians or groups to accomplish within the transition year
for three reasons: (1) In response to several stakeholder MIPS and APMs
RFI comments, we are not recommending a minimum number of hours for
performance of an activity; (2) we are offering a broad list of
activities from which MIPS eligible clinicians or groups may select;
and (3) also in response to MIPS and APMs RFI comments, we proposed
that an activity must be performed for at least 90 days during the
performance period for improvement activities credit. We intend to
reassess this requirement threshold in future years. We do not believe
it is appropriate to require a determined number of activities within a
specific subcategory at this time. This proposal aligns with the
requirements in section 1848(q)(2)(C)(iii) of the Act that states MIPS
eligible clinicians or groups are not required to perform activities in
each subcategory.
Lastly, we recognize that working with a QCDR could allow a MIPS
eligible clinician or group to meet the measure and activity criteria
for multiple improvement activities. For the transition year of MIPS,
there are several improvement activities in the inventory that
incorporate QCDR participation. Each activity must be selected and
achieved separately for the transition year of MIPS. A MIPS eligible
clinician or group cannot receive credit for multiple activities just
by selecting one activity that includes participation in a QCDR. As the
improvement activities inventory expands over time we were interested
in receiving comments on what restrictions, if any, should be placed
around improvement activities that incorporate QCDR participation.
The following is a summary of the comments we received regarding
submission criteria.
Comment: One commenter recommended that CMS base performance in the
improvement activities performance category on participating in a
number of improvement activities rather than a specific number of
hours.
Response: We would like to explain that we proposed at Sec.
414.1380 to require MIPS eligible clinicians to submit three high-
weighted improvement activities or six medium-weighted improvement
activities, or some combination of high and medium-weighted improvement
activities to achieve the highest possible score in this performance
category (81 FR 28210). Credit awarded under the improvement activities
performance category relies on the number of activities, not a specific
number of hours. We refer readers to the section entitled ``Required
Period of Time for Performing an Activity'' below, where we discuss the
90-day time period policy.
Comment: Other commenters did not support the improvement
activities performance category because of some specialty concerns on
the inability to report on two or more activities, such as one
commenter that indicated that doctors of chiropractic practice in
clinics, often with under 15 MIPS eligible clinicians, would have
problems reporting on two improvement activities. This commenter noted
that during the early adopter program for the NCQA Patient-Centered
Connected Care recognition program, doctors of chiropractic did not
experience favorable consideration because the TCPIs focused their
funding on primary care clinicians.
Response: We believe there are a sufficient number of broad
activities from which specialty practices, as well as primary care
clinicians, can select. Furthermore, as discussed previously in this
section, we are finalizing a policy reducing the required number of
activities for MIPS eligible clinicians and groups.
After consideration of the comments received regarding the
submission criteria, we are not finalizing the policies as proposed.
Rather, we are reducing the maximum number of activities required to
achieve the highest possible score in this performance category.
Specifically, we are finalizing at Sec. 414.1380 to set the
improvement activities submission criteria under MIPS, to achieve the
highest potential score, at two high-weighted improvement activities or
four medium-weighted improvement activities, or some combination of
high and medium-weighted improvement activities totaling no more than
four activities, for MIPS eligible clinicians participating as
individuals or as groups (refer to Table H in the Appendix to this
final rule with comment period for improvement activities and weights).
    Exceptions to the above apply for: Small practices; practices
located in rural areas; practices located in geographic HPSAs;
non-patient facing MIPS eligible clinicians or groups; and MIPS
eligible clinicians or groups that participate in a MIPS APM or a
patient-centered medical home submitting in MIPS. As discussed in
sections II.E.5.h. and II.E.6.
of this final rule with comment period, we are reducing the maximum
number of activities required for these MIPS eligible clinicians and
groups to achieve the highest possible score in this performance
category.
Specifically, for MIPS eligible clinicians and groups that are
small practices, practices located in rural areas or geographic HPSAs,
or non-patient facing MIPS eligible clinicians or groups, to achieve
the highest score, one high-weighted or two medium-weighted improvement
activities are required. For these MIPS eligible clinicians and groups,
in order to achieve one-half of the highest score, one medium-weighted
improvement activity is required.
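The finalized arithmetic for the transition year can similarly be sketched for illustration (again, the names and the special-status flag are illustrative assumptions, not part of this final rule with comment period): the maximum is 40 points, and activities reported by small practices, practices located in rural areas or geographic HPSAs, and non-patient facing MIPS eligible clinicians are double-weighted.

```python
# Illustrative sketch (not part of the rule) of the FINALIZED submission
# criteria: the highest score requires 40 points, with high-weighted
# activities worth 20 points and medium-weighted activities worth 10.
# For special-status clinicians (small practices, rural areas, geographic
# HPSAs, non-patient facing), each activity is double-weighted.
FINAL_MAX_POINTS = 40
WEIGHT_POINTS = {"high": 20, "medium": 10}

def final_category_percent(activities, special_status=False):
    """Return the percent score for a list of 'high'/'medium' activity weights."""
    multiplier = 2 if special_status else 1
    points = sum(WEIGHT_POINTS[w] * multiplier for w in activities)
    return min(points, FINAL_MAX_POINTS) / FINAL_MAX_POINTS * 100
```

Under this sketch, two high-weighted or four medium-weighted activities reach the highest score, while a special-status clinician reaches it with one high-weighted or two medium-weighted activities, and reaches one-half of the highest score with a single medium-weighted activity, matching the finalized policy described above.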
We will also provide full credit for the improvement activities
performance category for a MIPS eligible clinician or group that has
received certification or accreditation as a patient-centered medical
home or comparable specialty practice from a national program or from a
regional or state program, private payer or other body that administers
patient-centered medical home accreditation and certifies 500 or more
practices for patient-centered medical home accreditation or comparable
specialty practice certification.
We believe that this approach is appropriate for the transition
year of MIPS since this is a new performance category of requirements
for MIPS eligible clinicians and we want to ensure all MIPS eligible
clinicians understand what is required of them, while not being overly
burdensome.
All clinicians identified on the Participation List of an APM
receive at least one-half of the highest score. To develop the
improvement activities additional score assigned to all MIPS APMs, CMS
will compare the requirements of the specific APM with the list of
activities in the Improvement Activities Inventory in Table H in the
Appendix to this final rule with comment period and score those
activities in the same manner that they are otherwise scored for MIPS
eligible clinicians according to section II.E.6.a.(4) of this final
rule with comment period. For further explanation of how MIPS APMs
scores will be calculated, we refer readers to section II.E.5.h of this
final rule with comment period. Should the MIPS APM not receive the
maximum improvement activities performance category score, then the APM
entity can submit additional improvement activities. All other MIPS
eligible clinicians or groups that we identify as participating in APMs
will need to select additional improvement activities to achieve the
improvement activities highest score.
[[Page 77186]]
(d) Required Period of Time for Performing an Activity
    We proposed at Sec. 414.1360 that MIPS eligible clinicians or groups
must perform improvement activities for at least 90 days during the
performance period for improvement activities credit. We understand
there are some activities that are ongoing whereas others may be
episodic. We considered setting the threshold for the minimum time
required for performing an activity to longer periods up to a full
calendar year. However, after researching several organizations we
believe a minimum of 90 days is a reasonable amount of time. One
illustrative example is a large Veterans Administration health care
program that set a 90-day window for reviewing improvements in the
management of opioid dispensing.\19\
---------------------------------------------------------------------------
    \19\ Westanmo A, Marshall P, Jones E, Burns K, Krebs EE. Opioid
Dose Reduction in a VA Health Care System--Implementation of a
Primary Care Population-Level Initiative. Pain Med. 2015;16(5):1019-
26.
---------------------------------------------------------------------------
    Additional clarification on how some activities meet the 90-day
rule, or whether additional time is required, is reflected in the
description of each such activity in Table H in the Appendix to this
final rule with
comment period. In addition, we proposed that activities, where
applicable, may be continuing (that is, could have started prior to the
performance period and are continuing) or be adopted in the performance
period as long as an activity is being performed for at least 90 days
during the performance period.
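For illustration only, the continuous 90-day requirement, including the allowance for activities that began before the performance period, can be checked as in the following sketch (the function and variable names are illustrative assumptions, not part of this final rule with comment period):

```python
# Illustrative check (names are assumptions) that an activity was performed
# for at least 90 consecutive days falling within the performance period.
# An activity may be continuing (started before the period), so only the
# portion overlapping the performance period counts.
from datetime import date

MIN_CONSECUTIVE_DAYS = 90

def qualifies(activity_start, activity_end, period_start, period_end):
    """Return True if at least 90 consecutive days of the activity fall
    within the performance period."""
    overlap_start = max(activity_start, period_start)
    overlap_end = min(activity_end, period_end)
    # +1 because both endpoints are days on which the activity was performed
    return (overlap_end - overlap_start).days + 1 >= MIN_CONSECUTIVE_DAYS
```

Only the portion of the activity that overlaps the performance period counts toward the 90 consecutive days, so an activity adopted mid-period still qualifies as long as at least 90 days of it fall within the period.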
We anticipate in future years that extended improvement activities
time periods will be needed for certain activities. We will monitor the
time period requirement to assess if allowing for extended time
requirements may enhance the value associated with generating more
effective outcomes, or conversely, the extended time may reveal that
more time has little or no value added for certain activities when
associated with desired outcomes. We requested comments on this
proposal.
The following is a summary of the comments we received regarding
the required period of time for performing an activity.
Comment: Many commenters supported CMS's proposal to require
improvement activities performance for at least 90 days during the
performance period. Some commenters requested clarification about the
applicable time period, noting that not all activities in Table H in
the Appendix to this final rule with comment period lend themselves to
a 90-day performance period. Other commenters suggested limiting
reporting to 30 days or other time periods shorter than 90 days to
enable MIPS eligible clinicians to test innovative strategies for
improvement activities. One commenter suggested requiring improvement
activities be performed throughout the entirety of the performance
period.
Response: We note that we are requiring that each improvement
activity be performed for a continuous 90-day period. Additionally, the
continuous 90-day period must occur during the performance period.
We do not believe that reporting periods as short as 30 days are
sufficient to ensure that the activities being performed are robust
enough to result in actual practice improvements. However, we are also
cognizant of the inherent challenges associated with implementing new
improvement activities, which is why we are finalizing our requirement
that these activities be performed during a continuous 90-day period
during the performance period. We view that reporting period as an
appropriate balance for the transition year of MIPS, and will re-
examine reporting periods for improvement activities in the future.
Comment: Several commenters requested further clarification on our
proposal regarding points for patient-centered medical home recognition
in the improvement activities performance category. Specifically, the
commenters requested clarification regarding what specific date, either
as of December 31, 2017 or as of January 1, 2017, by which a practice
needs to be recognized as a patient-centered medical home in order to
claim optimal improvement activities performance category points.
Response: We would like to explain that a MIPS eligible clinician
or group must qualify as a certified patient-centered medical home or
comparable specialty practice for at least a continuous 90 days during
the performance period. Therefore, any MIPS eligible clinician or group
that does not qualify by October 1st of the performance year as a
certified patient-centered medical home or comparable specialty
practice cannot receive automatic credit as such for the improvement
activities performance category.
Comment: Other commenters were very concerned that the required
90-day reporting period for improvement activities was simply
inapplicable to many of the improvement activities listed by CMS in the
improvement activities inventory, and that in other cases it is unclear
what needs to be done for 90 days. The commenters believed the time
period for improvement activities should be tailored to the particular
activity being implemented. In some cases, positive change could occur
in less than 90 days, but even for activities with a longer time
horizon, a practice should receive credit for the improvement
activities as long as they are in place for at least one quarter. Another
commenter recommended that CMS assign timeframes for each improvement
activity for 2017, to gather empirical data regarding the time
intervals, instead of assigning a 90-day timeframe to all activities.
Response: While not all of the activities in the improvement
activities inventory lend themselves to performance for a full 90
consecutive days for all MIPS eligible clinicians, we believe that each
activity can be performed for a full 90 consecutive days by some, if
not all, MIPS eligible clinicians. We also believe a sufficient number
of activities are included that any eligible clinician may select and
perform an activity for 90 continuous days, allowing them to
successfully report under this performance category. Therefore, we are
finalizing our proposal that, for the transition year of MIPS, any
selected activity must be performed for at least 90 consecutive days.
After consideration of the comments regarding the required period
of time for performing an activity, we are finalizing at Sec. 414.1360
that MIPS eligible clinicians or groups must perform improvement
activities for at least 90 consecutive days during the performance
period for improvement activities performance category credit.
Activities, where applicable, may be continuing (that is, they could
have started prior to the performance period and be ongoing) or may be
adopted during the performance period, as long as the activity is
performed for at least 90 days during the performance period.
(4) Application of Improvement Activities to Non-Patient Facing MIPS
Eligible Clinicians and Groups
We understand that non-patient facing MIPS eligible clinicians and
groups may have a limited number of measures and activities to report.
Therefore, we proposed at Sec. 414.1360 allowing non-patient facing
MIPS eligible clinicians and groups to report on a minimum of one
activity to achieve partial credit or two activities to achieve full
credit to meet the improvement activities submission criteria. These
non-patient facing MIPS eligible
[[Page 77187]]
clinicians and groups receive partial or full credit for submitting one
or two activities irrespective of any type of weighting, medium or high
(for example, two medium activities will qualify for full credit). For
scoring purposes, non-patient facing MIPS eligible clinicians or groups
receive 30 points per activity, regardless of whether the activity is
medium or high. For example, one high activity and one medium activity
could be selected to receive 60 points. Similarly, two medium
activities could also be selected to receive 60 points.
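The proposed point arithmetic described above can be sketched as follows. This is an illustrative sketch only, using the 30-point-per-activity and 60-point full-credit values stated in the text; the function and variable names are hypothetical, not CMS terminology:

```python
# Illustrative sketch of the proposed improvement activities scoring for
# non-patient facing MIPS eligible clinicians or groups: each submitted
# activity is worth 30 points regardless of medium or high weighting,
# and 60 points constitutes full credit.
FULL_CREDIT = 60
POINTS_PER_ACTIVITY = 30

def non_patient_facing_score(activities):
    """activities: list of activity weightings, e.g. ['high', 'medium']."""
    return min(len(activities) * POINTS_PER_ACTIVITY, FULL_CREDIT)

print(non_patient_facing_score(['high', 'medium']))  # 60 (full credit)
print(non_patient_facing_score(['medium']))          # 30 (partial credit)
```

As the text notes, one high and one medium activity, or two medium activities, both reach 60 points under this proposal because the weighting does not affect the per-activity point value.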
We anticipate the number of activities for non-patient facing MIPS
eligible clinicians or groups will increase in future years as we
gather more data on the feasibility of performing improvement
activities. As part of the process for identifying activities, we
consulted with several organizations that represent a cross-section of
non-patient facing MIPS eligible clinicians and groups. Illustrative
examples of those consulted include organizations that represent
cardiologists involved in nuclear medicine, nephrologists who serve
only in a consulting role to other clinicians, and pathologists who,
while they typically function as a team, have members that perform
different, primarily non-patient facing roles within their specialty.
In the course of those discussions these organizations identified
improvement activities they believed would be applicable. The comments
on activities appropriate for non-patient facing MIPS eligible
clinicians or groups are reflected in the proposed improvement
activities inventory across multiple subcategories. For example,
several of these organizations suggested consideration for Appropriate
Use Criteria (AUC). As a result, we have incorporated AUC into some of
the activities. We encourage MIPS eligible clinicians or groups who are
already required to use AUC (for example, for advanced imaging) to
report an improvement activity other than one related to appropriate
use. Another example, under Patient Safety and Practice Assessment, is
the implementation of an antibiotic stewardship program that measures
the appropriate use of antibiotics for several different conditions
(Upper Respiratory Infection (URI) treatment in children, diagnosis of
pharyngitis, and bronchitis treatment in adults) according to clinical
guidelines for diagnostics and therapeutics. In addition, we requested
comments on what activities would be appropriate for non-patient facing
MIPS eligible clinicians or groups to add to the improvement activities
inventory in the future. We requested comments on this proposal.
The following is a summary of the comments we received regarding
the application of improvement activities to non-patient facing MIPS
eligible clinicians and groups.
Comment: Some commenters expressed their support for the general
approach of reducing the improvement activities performance category
requirements for non-patient facing MIPS eligible clinicians and
groups, as well as MIPS eligible clinicians practicing in rural areas
or health professional shortage areas. Other commenters disagreed with
that approach, stating that non-patient facing MIPS eligible clinicians
should be able to obtain a full score of 60 points without any special
modifications to improvement activities scoring, while another commenter
did not support reducing the improvement activities performance
category requirements for these MIPS eligible clinicians and
recommended that we hold all clinicians to the same standard. Other
commenters suggested increasing the number of MIPS eligible clinicians
in a practice required to meet the definition of a small practice from
15 to 25 for purposes of the improvement activities performance
category. The commenters were also concerned that several
subcategories, such as Beneficiary Engagement and Expanded Practice
Access, may limit non-patient facing MIPS eligible clinicians' access
to the broader list of activities available to other types of
practices, and suggested that CMS limit the number of required
activities in the transition year to two for non-patient facing MIPS
eligible clinicians.
Response: We believe there are several subcategories, such as
Beneficiary Engagement and Expanded Practice Access, that may limit a
non-patient facing MIPS eligible clinician's access to the broader
list of activities available to other types of practices, and we
believe it is reasonable to limit the number of required activities in
the transition year for non-patient facing MIPS eligible clinicians. We
refer readers to Sec. 414.1305 for the definition of small practice
for the purposes of MIPS.
After consideration of the comments regarding the application of
improvement activities to non-patient facing MIPS eligible clinicians
and groups, we are not finalizing the policies as proposed. Rather,
based on commenters' feedback, we believe that it is appropriate to
reduce the number of activities that a non-patient facing MIPS eligible
clinician must select to achieve credit to meet the improvement
activities data submission criteria. Specifically, we are finalizing at
Sec. 414.1380 that for non-patient facing MIPS eligible clinicians or
groups, to achieve the highest score, one high-weighted or two medium-
weighted improvement activities are required. For these MIPS eligible
clinicians and groups, in order to achieve one-half of the highest
score, one medium-weighted improvement activity is required.
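The finalized credit rule described above can be sketched as a simple decision function. This is an illustrative sketch only; the function name and the 1.0/0.5 fractions (for the "highest score" and "one-half of the highest score") are hypothetical labels for the policy as stated, not CMS code:

```python
# Illustrative sketch of the finalized policy for non-patient facing
# MIPS eligible clinicians or groups: one high-weighted or two
# medium-weighted activities earn the highest score; one medium-weighted
# activity earns one-half of the highest score.
def credit_fraction(high: int, medium: int) -> float:
    """Return the fraction of the highest improvement activities score."""
    if high >= 1 or medium >= 2:
        return 1.0   # full credit: one high-weighted or two medium-weighted
    if medium == 1:
        return 0.5   # half credit: one medium-weighted
    return 0.0       # no credit

print(credit_fraction(high=1, medium=0))  # 1.0
print(credit_fraction(high=0, medium=1))  # 0.5
```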
(5) Special Consideration for Small, Rural, or Health Professional
Shortage Areas Practices
Section 1848(q)(2)(B)(iii) of the Act requires the Secretary, in
establishing improvement activities, to give consideration to small
practices and practices located in rural areas as defined at Sec.
414.1305 and in geographic based HPSAs as designated under section
332(a)(1)(A) of the Public Health Service Act. In the MIPS and APMs
RFI, we requested comments on how improvement activities should be
applied to MIPS eligible clinicians or groups in small practices, in
rural areas, and in geographic HPSAs: whether a lower performance
threshold or different measures should be established that would better
allow those MIPS eligible clinicians or groups to perform well in this
performance category; what methods should be leveraged to appropriately
identify these practices; and what best practices should be considered
to develop flexible and adaptable improvement activities based on the
needs of the community and its population.
We engaged high performing organizations, including several rural
health clinics with 15 or fewer clinicians that are designated as
geographic HPSAs, to provide feedback on relevant activities based on
their specific circumstances. Some examples provided include
participation in the implementation of self-management programs, such
as for diabetes, and early use of telemedicine, as in the case of a
top-performing multi-specialty rural practice serving 20,000 people
across a 25,000-square-mile area of rural North Dakota. Comments on
activities appropriate for MIPS eligible clinicians or groups located
in rural areas or practices that are designated as geographic HPSAs are
reflected in the proposed improvement activities inventory across
multiple subcategories.
After consideration of comments and listening sessions, we proposed
at Sec. 414.1360 to accommodate small practices and practices located
in rural areas, or geographic HPSAs for the
[[Page 77188]]
improvement activities performance category by allowing MIPS eligible
clinicians or groups to submit a minimum of one activity to achieve
partial credit or two activities to achieve full credit. These MIPS
eligible clinicians or groups receive partial or full credit for
submitting two activities of any type of weighting (for example, two
medium activities will qualify for full credit). We anticipate the
required number of activities for small practices, practices located in
rural areas, and practices located in geographic HPSAs will increase in
future years as we gather more data on the feasibility of these
practices performing improvement activities.
Therefore, we requested comments on what activities would be
appropriate for these practices for the improvement activities
inventory in future years.
The following is a summary of the comments we received regarding
special consideration for MIPS small practices and practices located in
rural areas or geographic HPSAs.
Comment: Some commenters requested that, to facilitate rapid
learning in the improvement activities performance category, CMS
provide targeted, practical technical assistance to solo and small
practices, focused on improvement activities tailored to their level of
quality improvement activity.
Response: We intend to provide targeted, practical technical
assistance to MIPS eligible clinicians. Specifically, we intend to make
MACRA technical assistance available to solo and small practices. In
addition, MIPS eligible clinicians may contact the Quality Payment
Program Service Center with specific questions.
Comment: Some commenters proposed that CMS recognize improvement
efforts for clinicians in small practices by awarding them ``full
credit'' in the improvement activities for participation in a Practice
Transformation Network.
Response: Please note that Transforming Clinical Practice
Initiative (TCPI) credit, which includes activities such as
participation in a Practice Transformation Network, is provided as a
high-weighted activity for the transition year of MIPS.
After consideration of the comments regarding special consideration
for small practices and practices located in rural areas or geographic
HPSAs, we are not finalizing the policies as proposed. Rather, based on
stakeholders' feedback, we believe that it is appropriate to reduce the
number of activities required to achieve full credit in this
performance category for small practices and practices located in rural
areas or health professional shortage areas. Specifically, we are
finalizing at Sec.
414.1380 that for MIPS eligible clinicians and groups that are small
practices or located in rural areas, or geographic HPSAs, to achieve
full credit, one high-weighted or two medium-weighted improvement
activities are required. In addition, we are modifying our proposed
definition of rural area and finalizing at Sec. 414.1305 that a rural
area means a ZIP code designated as rural, using the most recent HRSA
Area Health Resource File data set available. We proposed
using HRSA's 2014-2015 Area Resource File but decided a non-specific
reference would be more broadly applicable. In addition, we are
finalizing the following definitions, as proposed, at Sec. 414.1305:
(1) small practices means practices consisting of 15 or fewer
clinicians and solo practitioners; and (2) Health Professional Shortage
Areas (HPSA) means areas as designated under section 332(a)(1)(A) of
the Public Health Service Act.
We refer readers to section II.E.6.a.(4) of this final rule with
comment period for a more detailed explanation of the number of points
and scoring for the improvement activities performance category.
(6) Improvement Activities Subcategories
Section 1848(q)(2)(B)(iii) of the Act provides that the improvement
activities performance category must include at least the subcategories
listed below. The statute also provides the Secretary discretion to
specify additional subcategories for the improvement activities
performance category, which have also been included below.
• Expanded practice access, such as same day appointments
for urgent needs and after-hours access to clinician advice.
• Population management, such as monitoring health
conditions of individuals to provide timely health care interventions
or participation in a QCDR.
• Care coordination, such as timely communication of test
results, timely exchange of clinical information to patients and other
MIPS eligible clinicians or groups, and use of remote monitoring or
telehealth.
• Beneficiary engagement, such as the establishment of care
plans for individuals with complex care needs, beneficiary self-
management assessment and training, and using shared decision-making
mechanisms.
• Patient safety and practice assessment, such as through
the use of clinical or surgical checklists and practice assessments
related to maintaining certification.
• Participation in an APM, as defined in section
1833(z)(3)(C) of the Act.
In the MIPS and APMs RFI, we requested recommendations on the
inclusion of the following five potential new subcategories:
• Promoting Health Equity and Continuity, including (a)
serving Medicaid beneficiaries, including individuals dually eligible
for Medicaid and Medicare, (b) accepting new Medicaid beneficiaries,
(c) participating in the network of plans in the Federally Facilitated
Marketplace or state exchanges, and (d) maintaining adequate equipment
and other accommodations (for example, wheelchair access, accessible
exam tables, lifts, scales, etc.) to provide comprehensive care for
patients with disabilities.
• Social and Community Involvement, such as measuring
completed referrals to community and social services or evidence of
partnerships and collaboration with the community and social services.
• Achieving Health Equity, such as for MIPS eligible
clinicians or groups that achieve high quality for underserved
populations, including persons with behavioral health conditions,
racial and ethnic minorities, sexual and gender minorities, people with
disabilities, people living in rural areas, and people in geographic
HPSAs.
• Emergency preparedness and response, such as measuring
MIPS eligible clinician or group participation in the Medical Reserve
Corps, measuring registration in the Emergency System for Advance
Registration of Volunteer Health Professionals, measuring relevant
reserve and active duty uniformed services MIPS eligible clinician or
group activities, and measuring MIPS eligible clinician or group
volunteer participation in domestic or international humanitarian
medical relief work.
• Integration of primary care and behavioral health, such as
measuring or evaluating such practices as: co-location of behavioral
health and primary care services; shared/integrated behavioral health
and primary care records; or cross-training of MIPS eligible clinicians
or groups participating in integrated care. This subcategory also
includes integrating behavioral health with primary care to address
substance use disorders or other behavioral health conditions, as well
as integrating mental health with primary care.
We recognize that quality improvement is a critical aspect of
[[Page 77189]]
improving the health of individuals and the health care delivery system
overall. We also recognize that this will be the first time MIPS
eligible clinicians or groups will be measured on their quality
improvement work on a national scale. We have approached the
improvement activities performance category with these principles in
mind along with the overarching principle for the MIPS program that we
are building a process that will have increasingly more stringent
requirements over time.
Therefore, for the transition year of MIPS, we proposed at Sec.
414.1365 that the improvement activities performance category include
the subcategories of activities provided at section 1848(q)(2)(B)(iii)
of the Act. In addition, we proposed at Sec. 414.1365 adding the
following subcategories: ``Achieving Health Equity,'' ``Integrated
Behavioral and Mental Health,'' and ``Emergency Preparedness and
Response.'' In response to multiple MIPS and APMs RFI comments
requesting the inclusion of ``Achieving Health Equity,'' we proposed to
include this subcategory because: (1) It is important and may require
targeted effort to achieve and so should be recognized when
accomplished; (2) it supports our national priorities and programs,
such as Reducing Health Disparities; and (3) it encourages ``use of
plans, strategies, and practices that consider the social determinants
that may contribute to poor health outcomes.'' (CMS, Quality Innovation
Network Quality Improvement Organization Scope of Work: Excellence in
Operations and Quality Improvement, 2014).
Similarly, MIPS and APMs RFI comments supported the inclusion of
the subcategory of ``Integrated Behavioral and Mental Health,'' citing
that ``statistics show 50 percent of all behavioral health disorders
are being treated by primary care and behavioral health integration.''
Additionally, according to MIPS and APMs RFI comments, behavioral
health integration with primary care is already being implemented in
numerous locations throughout the country. The third additional
subcategory we proposed to include is ``Emergency Preparedness and
Response,'' based on MIPS and APMs RFI comments that encouraged us to
consider this subcategory to help ensure that practices remain open
during disaster and emergency situations and support emergency response
teams as needed. Additionally, commenters were able to provide a
sufficient number of recommended activities (that is, more than one)
that could be included in the improvement activities inventory in all
of these proposed subcategories and the subcategories included under
section 1848(q)(2)(B)(iii) of the Act.
We also solicited public comments on two additional subcategories
for future consideration:
• Promoting Health Equity and Continuity, including (a)
serving Medicaid beneficiaries, including individuals dually eligible
for Medicaid and Medicare, (b) accepting new Medicaid beneficiaries,
(c) participating in the network of plans in the Federally Facilitated
Marketplace or state exchanges, and (d) maintaining adequate equipment
and other accommodations (for example, wheelchair access, accessible
exam tables, lifts, scales, etc.) to provide comprehensive care for
patients with disabilities; and
• Social and Community Involvement, such as measuring
completed referrals to community and social services or evidence of
partnerships and collaboration with community and social services.
For these two subcategories, we requested activities that can
demonstrate some improvement over time and go beyond current practice
expectations. For example, maintaining existing medical equipment would
not qualify as an improvement activity, but implementing improved
clinical workflow processes that reduce wait times for patients with
disabilities, or improving coordination of care (including activities
that regularly provide additional assistance in finding other care
needed for patients with disabilities), are examples of activities that
could show improvement in clinical practice over time.
We requested comments on these proposals.
The following is a summary of the comments we received regarding
improvement activities subcategories.
Comment: Some commenters recommended inclusion of activities under
the two additional subcategories: Promoting Health Equity and
Continuity, and Social and Community Involvement. One commenter
suggested we include the ASCO/
CNS Chemotherapy Safety Administration Standards, potentially under the
achieving health equity subcategory, with the highest weight. Other
commenters recommended we include the following activities in this
subcategory: Adhering to the U.S. Access Board standards for medical
diagnostic equipment; reducing wait times for patients with disabilities
for whom long wait times are a barrier to care; replacing inaccessible
equipment; remodeling or redesigning an office to meet accessibility
standards in areas other than medical diagnostic equipment, and
training staff on best practices in serving people with disabilities,
including appropriate appointment lengths, person-centered care, and
disability etiquette. The commenters also suggested that CMS include
people with disabilities in the subcategory of expanded practice
access, stating that despite the Americans with Disabilities Act (ADA),
many clinician offices remain inaccessible to people with disabilities.
One commenter recommended that for this subcategory, CMS require
both MIPS eligible clinicians and community service clinicians to
demonstrate improvement in their respective functions, processes, or
outcomes and consider developing metrics to evaluate the quality of
health and well-being services that community-based organizations
provide. Another commenter recommended that activities in the Social
and Community Involvement subcategory include employing community
health workers (CHWs) or integrating CHWs employed by community-based
organizations into care teams, establishing a community advisory
council, and creating formal linkages with social services clinicians
and community-based organizations.
Response: We will proceed with the current proposed list of
subcategories included in Table H in the Appendix to this final rule
with comment period, as well as the subcategory for participation in an
APM, for the transition year of MIPS. We will consider these
recommendations in future years as part of the annual call for measures
and activities in future rulemaking.
Comment: A few commenters recommended that in order to encourage
and allow MIPS eligible clinicians to proactively incorporate and test
new technologies into their practice, while closely sharing the
decision making process with patients, CMS should develop an additional
improvement activities subcategory to encourage MIPS eligible
clinicians and groups to engage patients to consider new technologies
that may be an option for their care.
Response: These recommendations will be considered during the call
for activities and addressed in future rulemaking as necessary.
Comment: Some commenters stated general support for the improvement
activities performance category, including efforts to benefit long-term
care, and the inclusion of the subcategories of Achieving Health Equity
and Integration of Behavioral and Mental Health.
[[Page 77190]]
Response: We have included the Achieving Health Equity and
Integration of Behavioral and Mental Health subcategories.
Comment: Other commenters recommended that CMS group similar
activities together to reduce complexity and confusion, and provided an
example of moving all QCDR activities under the Population Health
Management subcategory so MIPS eligible clinicians can easily determine
which capabilities they already have or may adopt with use of a QCDR.
Response: We believe that we have appropriately placed activities
within their subcategories as proposed. However, we would like to note
that we are committed to ease of reporting and we allow MIPS eligible
clinicians to report across all subcategories. We will provide
technical assistance through the Quality Payment Program Service Center
and other resources.
Comment: One commenter requested the ability to select an activity
across any subcategory.
Response: We are finalizing our proposed policy that MIPS eligible
clinicians may select any activity across any improvement activities
subcategory, as our intention is to provide as much flexibility for
MIPS eligible clinicians as possible. We believe that where possible,
MIPS eligible clinicians should choose activities that are most
important or most appropriate for their practice across any
subcategory.
Comment: Many commenters supported CMS's flexibility in recognizing
a broad range of activities in the improvement activities performance
category for Care Coordination, Beneficiary Engagement, and Patient
Safety, and recommended that CMS include a fourth subcategory that
allows practices to focus on office efficiency/operations in order to
promote long-term success. Some commenters also requested that CMS
include two additional subcategories: Promoting Health Equity and
Continuity, and Social and Community Involvement.
Response: We will proceed with the current proposed list of
subcategories for the transition year of MIPS, included in Table H in
the Appendix to this final rule with comment period, as well as the
subcategory for participation in an APM. Further determinations of
improvement activities and subcategories will be addressed in future
rulemaking and as part of the annual call for subcategories and
activities, which will occur simultaneously with the annual call for
measures.
After consideration of the comments regarding improvement
activities subcategories, we are finalizing at Sec. 414.1365 that the
improvement activities performance category will include the
subcategories of activities provided at section 1848(q)(2)(B)(iii) of
the Act. In addition, we are finalizing at Sec. 414.1365 the following
additional subcategories: ``Achieving Health Equity,'' ``Integrated
Behavioral and Mental Health,'' and ``Emergency Preparedness and
Response.''
(7) Improvement Activities Inventory
To implement the MIPS program, we are required to create an
inventory of improvement activities. Consistent with our MIPS strategic
goals, we believe it is important to create a broad list of activities
that can be used by multiple practice types to demonstrate improvement
activities and activities that may lend themselves to being measured
for improvement in future years.
We took several steps to ensure the initial improvement activities
inventory is inclusive of activities in line with the statutory
language. We conducted numerous interviews with high-performing
organizations of all sizes and an environmental scan to identify
existing models, activities, or measures that met all or part of the
improvement activities performance category, including patient-
centered medical homes, the Transforming Clinical Practice Initiative
(TCPI), CAHPS surveys, and AHRQ's Patient Safety Organizations. In
addition, we reviewed the CY 2016 PFS final rule with comment period
(80 FR 70886) and the comments received in response to the MIPS and
APMs RFI regarding the improvement activities performance category. The
improvement activities inventory was compiled as a result of the
stakeholder input, an environmental scan, MIPS and APMs RFI comments,
and subsequent working sessions with AHRQ and ONC and additional
communications with CDC, SAMHSA and HRSA.
Based on the above discussions, we established guidelines under
which improvement activities are included based on one or more of the
following criteria (in any order):
• Relevance to an existing improvement activities
subcategory (or a proposed new subcategory);
• Importance of an activity toward achieving improved
beneficiary health outcomes;
• Importance of an activity that could lead to improvement
in practice to reduce health care disparities;
• Alignment with patient-centered medical homes;
• Representative of activities that multiple MIPS eligible
clinicians or groups could perform (for example, primary care,
specialty care);
• Feasible to implement, recognizing importance in
minimizing burden, especially for small practices, practices in rural
areas, or in areas designated as geographic HPSAs by HRSA;
• CMS is able to validate the activity; or
• Evidence supports that an activity has a high probability
of contributing to improved beneficiary health outcomes.
Activities that overlap with other performance categories were
included if there was a strong policy rationale to include them in the
improvement activities inventory. We proposed to use the improvement
activities inventory for the transition year of MIPS, as provided in
Table H in the Appendix to this final rule with comment period. For
further description of how MIPS eligible clinicians or groups would be
designated to submit to MIPS for improvement activities, we refer
readers to the proposed rule (81 FR 28177). For all other MIPS eligible
clinicians or groups participating in APMs that would report to MIPS,
this section applies and we also refer readers to the scoring
requirements for these MIPS eligible clinicians or groups in the
proposed rule (81 FR 28234).
We requested comments on the improvement activities inventory and
suggestions for improvement activities for future years as well.
The following is a summary of the comments we received regarding
the statutory requirements for improvement activities related to the
activities that must be specified under the improvement activities
performance category. We refer readers to Table H in the Appendix to
this final rule with comment period.
General Comments Related to Activities Across More Than One Subcategory
Comment: We received several comments supporting the broad
descriptions provided for activities in the MIPS transition year to
enable MIPS eligible clinicians to effectively and appropriately
implement and report in a manner that best represents their
performance. Other commenters requested more detail about the
methodology used to assign weights to the activities, and questioned
whether CMS intends to develop specifications for activities as it does
for quality measures.
Response: We appreciate the requests to provide further details
around the methodology and specifications for improvement activities.
Under the statute, we may contract with various entities to assist in
identifying activities
[[Page 77191]]
and specifying criteria for the activities. Accordingly, the
methodology we used to assign weights to the activities was to engage
multiple stakeholder groups, including the Centers for Disease Control and Prevention,
Health Resources and Services Administration, Office of the National
Coordinator for Health Information Technology, Substance Abuse and
Mental Health Services Administration, Agency for
Healthcare Research and Quality, Food and Drug Administration, the
Department of Veterans Affairs, and several clinical specialty groups,
small and rural practices and non-patient facing clinicians to define
the criteria and establish weighting for each activity. Activities were
proposed to be weighted as high based on the extent to which they align
with activities that support the patient-centered medical home, since
that is the standard under section 1848(q)(5)(C)(i) of the Act for
achieving the highest potential score for the improvement activities
performance category, as well as with our priorities for transforming
clinical practice. Activities that require performance of multiple
actions, such as participation in the Transforming Clinical Practice
Initiative, participation in a MIPS eligible clinician's state Medicaid
program, or an activity identified as a public health priority (such as
emphasis on anticoagulation management or utilization of prescription
drug monitoring programs) were also proposed to be weighted as high.
Future revisions and specifications to the activities may be provided
through future rulemaking, consistent with the needs and maturation
process of the MIPS program in future years.
Comment: Several commenters supported the proposed list of
activities but recommended that the number of required activities be
reduced and that more activities be highly weighted.
Response: As discussed in section II.E.5.f.(2) of this final rule
with comment period, we have reduced the number of activities that MIPS
eligible clinicians are required to report on to no more than four
medium-weighted activities or two high-weighted activities, or any
combination thereof comprising fewer than four activities. We are
reducing the number of activities for small practices, practices
located in rural and geographic HPSAs and non-patient facing clinicians
to no more than one high-weighted activity or two medium-weighted
activities to achieve the highest score.
Comment: Some comments recommended assigning a higher weight to
QCDR-related improvement activities and QCDR functions, and one
commenter recommended that use of a QCDR count for several activities.
Response: Participating in a QCDR is not sufficient for
demonstrating performance of multiple improvement activities and we do
not believe at this time it warrants a higher weighting. In addition,
QCDR participation was not proposed as a high-weighted activity
because, while useful for data collection, it is neither critical for
supporting certified patient-centered medical homes nor requires
multiple actions, which are criteria we considered for high-weighting.
We also note that while QCDR participation may not automatically confer
improvement activities performance category credit, it may put MIPS
eligible clinicians in a position to report multiple improvement
activities, since there are several that specifically reference QCDR
participation. We ask that each MIPS eligible clinician or group select
from the broad list of activities included in Table H in the
Appendix to this final rule with comment period.
Comment: One commenter suggested that we list ID numbers for
activities listed in the improvement activities inventory.
Response: We will include ID numbers for each activity in the online
portal, as well as a short title.
Comment: Many commenters suggested that we adopt more specialty-
specific activities, citing their belief that many improvement
activities are focused on primary care. The commenters made many
suggestions for specialty-specific activities, including care
coordination, patient safety, and other activities.
Response: There are many future activities that we would like to
develop and consider for inclusion in MIPS, including those specific to
specialties. We intend to take these comments into account in future
rulemaking and as part of the annual call for the subcategory and
activities process that will occur simultaneously with the annual call
for measures. We note that the current improvement activities inventory
does offer activities that can benefit all practice types and we
believe specialists will be able to successfully report under this
performance category.
Comment: One commenter requested that CMS clarify and distinguish
between activities under the direction and ability of a user, as
opposed to activities under the clinical supervision and control of
MIPS eligible clinicians or groups. Another commenter stated that
activities under the improvement activities performance category needed
to reward active participation in an activity rather than rewarding the
MIPS eligible clinicians for being part of an entity that pays for the
activity. For example, the commenter stated that a teaching hospital
might be the awardee in a BPCI contract, but the faculty practice
clinicians are leading the effort to redesign care.
Response: We believe that the requirement that the MIPS eligible
clinician or group must actually perform the activity for a continuous
90-day period addresses this concern, because it rewards active
participation in an activity rather than rewarding the clinician merely
for being part of an entity that pays for the activity. In the example that the commenter
provided, the practices reporting at the TIN/NPI level would receive
the credit for the improvement activities.
Comment: Some commenters believe that the activities in this
performance category would not lead to improvement.
Response: For the transition year of MIPS, we intend for MIPS
eligible clinicians to focus on achievement of these activities; they
do not need to show that the activity led to improvement. We believe
these activities are important for all MIPS eligible clinicians because
their purpose is to encourage movement toward clinical practice
improvement.
Comment: Another commenter noted that, under the proposal, MIPS
eligible clinicians who are required to consult clinical decision
support (CDS) under a mandate ``are encouraged'' to select
improvement activities other than those related to the use of CDS. The
commenter suggested that CMS maintain this statement as a
recommendation and not require that a MIPS eligible clinician or group
report another improvement activity if they are participating under the
mandate and report an improvement activity related to CDS.
Response: We would like to note that we encourage MIPS eligible
clinicians or groups who are already required to use AUC (for example,
for advanced imaging) to report an improvement activity other than one
related to appropriate use. We do not mandate any activity that must be
reported. Further, we do not require MIPS eligible clinicians to
consult with CDS. We also do not require that an MIPS eligible
clinician or group report another improvement activity if they are
already participating and reporting on an existing activity related to
CDS.
Comment: One commenter suggested that CMS consider the existing
reporting burdens on hospital-based MIPS eligible clinicians, and
encouraged CMS to work closely with third party recognition programs to
ensure that information on recognized MIPS eligible clinicians can
[[Page 77192]]
be accurately reported directly to CMS and linked to MIPS eligible
clinicians accordingly. Another commenter suggested that CMS ensure
that specifications for improvement activities undergo proper
stakeholder comment, including a public comment period prior to
finalization. A few commenters also requested that CMS allow additional
stakeholder comment on the improvement activities specifications.
Response: We intend to continue assessing hospital-based MIPS
eligible clinicians' reporting burden under the MIPS program. While the
current activity list is expansive, there remain opportunities to
expand the list further in future years. The current list, however,
does offer activities that can benefit all practice types and we
believe hospital-based specialists will be able to successfully report
improvement activities. Additionally, we provided earlier opportunities
for public input and comment on activities as part of both the 2015
MIPS and APM RFI and the 2016 proposed rule.
Comment: Another commenter recommended that CMS change the language
regarding the definition of medical homes to those that are
``nationally recognized accredited or certified,'' as the commenter
regularly uses ``certified'' and ``accredited'' interchangeably.
Response: We refer readers to section II.E.5.f. of this final rule
with comment period for discussions on the definition of recognized
certifying or accrediting bodies for patient-centered medical homes.
Comment: One commenter recommended a flexible approach to quality
assessment that emphasizes outcomes of care and that favors continuous
quality improvement methodologies rather than rigid, process-oriented
patient-centered medical home certification models. The commenter
believed that relying on patient-centered medical home certification as
a means of quality assessment runs the risk of practices not actually
realigning efforts to produce higher quality and more cost effective
care.
Response: We refer readers to section II.E.6.a.(4)(c) of this final
rule with comment period where we discuss patient-centered medical home
certification models.
Activities Related to the Patient Safety and Practice Assessment
Subcategory
Comment: We received more than 25 comments requesting changes or
additions to activities under the Patient Safety and Practice
Assessment subcategory. Under this subcategory, several commenters
suggested that CMS consider Maintenance of Certification (MOC) Part IV
participation as an improvement activity in all improvement activities
subcategories, not just the Patient Safety/Practice Assessment
subcategory. Other commenters suggested that Participation in
Maintenance of Certification Part IV should be re-designated as a high
priority. A few commenters also pointed out inconsistencies with
reference to PDMP as a high-weighted activity in this section compared
to what is included in the improvement activities inventory and
requested for the change to a high weight be made for this activity in
the inventory list.
Response: We recognize that some activities may align with more
than one subcategory but have assigned each activity to one and only
one subcategory to minimize confusion and avoid an unwieldy list of too
many or duplicative activities that may be difficult to select from for
the transition year of MIPS. MIPS eligible clinicians may select any
activity across any subcategories to meet the criteria for the
improvement activities performance category. We look forward to working
with stakeholders on activity alignments with subcategories in future
years. We also believe that high weighting should be used for
activities that directly address practice areas with the greatest
impact on beneficiary care, safety, health, and well-being. We have
focused high weighting under the subcategories on those activities. We
do not believe there is an inconsistency as PDMP Consultation is listed
as a high-weighted activity and annual registration in a PDMP is listed
as a medium-weighted activity. We have made a revision in the
Consultation of PDMP activity to further elaborate and explain the
requirements.
Comment: Many commenters suggested that CMS recognize continuing
medical education (CME) activities provided by national recognized
accreditors, completion of other state/local licensing requirements and
providing free care to those in need as improvement activities,
particularly those CME activities that involve assessment and
improvement of patient outcomes or care quality, best practice
dissemination and aid in the application of the ``three aims'' (better
care; healthier people and communities; smarter spending), the National
Quality Strategy, and the CMS Quality Strategy. The commenters also
recommended that inclusion of surveys or interviewing clinicians to
determine if they have applied lessons learned to their practice for at
least 90 days following an activity should meet compliance
requirements.
Response: We appreciate the suggestions that we grant improvement
activities credit for activities already certified as CME activities;
however, for the transition year of the MIPS program we do not have
sufficient data to identify which CMEs could be included as activities.
We will consider these recommendations for additional activities in
future years as part of the nomination process.
Comment: One commenter recommended that the improvement activities
performance category be used to evaluate what activities, in what
quantity, contribute to increased value and improve quality, and that
CMS avoid using overly prescriptive thresholds or quantities of
activities requirements, such as those used in CPC, that show no
correlation to outcomes, quality, or costs. The commenter suggested
that CMS align its criteria for improvement activities with activities
that are included as components of the patient-centered medical home
model. Another
commenter advised significantly reducing process-oriented measures in
the improvement activities performance category and building on
activities that clinicians were already completing, because process-
oriented measures could be perceived as busy work. This commenter also
stated that when relevant improvement activities were not otherwise
available, CMS could reduce the burden by allowing certified
improvement activities as partial or complete satisfaction of
improvement activities requirements.
Response: We believe that MIPS eligible clinicians are dedicated to
the care of beneficiaries and will only attest to activities that they
have undertaken in their practice that follow the specific guideline of
each improvement activity. We note we have not proposed prescriptive
thresholds for activities beyond an attestation that a certain
percentage of patients were impacted by a given activity and that in
establishing the improvement activities performance category we
included activities that align with those patient-centered medical
homes typically perform. We are not reducing process-oriented
improvement activities in this performance category because these were
activities that multiple practices recommended as contributing to
practice improvements. We are also not allowing partial completion of
an activity to count toward the improvement activities score. We refer
readers to section II.E.5.f.(3)(c) of this final rule with comment
period for discussions on how we have reduced
[[Page 77193]]
the number of activities required for the improvement activities
performance category which we believe also addresses burden. In
addition, we would like to explain that the activities in the
improvement activities inventory were identified by different types of
practices such as rural and small practices, as well as large
practices, who indicated these are improvement activities that
clinicians are already performing and believed they should be included
in the improvement activities inventory.
Activities Related to the Population Management Subcategory
Comment: We received more than 10 comments related to the
Population Management subcategory. One commenter expressed support for
the 2014 AHA/ACC/HRS Guideline for the Management of Patients with
Atrial Fibrillation, noting that comprehensive patient education, care
coordination, and appropriate dosing decisions are important for
managing patients on anticoagulants, including warfarin and novel oral
anticoagulants. The commenter also indicated that the use of validated
electronic decision support and clinical management tools, particularly
those that support shared decision making, may benefit all patients
treated with anticoagulants. The commenter recommended that improvement
activities be inclusive of patients treated with all anticoagulants
while recognizing differences in management requirements.
Response: We agree that comprehensive patient education, care
coordination, and appropriate dosing decisions are important for
managing patients on anticoagulants. We acknowledge that the use
of validated electronic decision support and clinical management tools,
particularly those that support shared decision making, may benefit all
patients treated with anticoagulants. We refer the readers to section
II.E.5.g. of this final rule with comment period for more information
on electronic decision support. We also acknowledge that improvement
activities should be inclusive of patients treated with all
anticoagulants while recognizing differences in management
requirements.
We note that because anticoagulants have been consistently
identified as the most common causes of adverse drug events across
health care settings, the Population Management activity starting with
``Participation in a systematic anticoagulation program (coagulation
clinic, patient self-reporting program, patient self-management
program)'' highlights the importance of close monitoring of Vitamin
K antagonist therapy (warfarin) and the use of other coagulation
cascade inhibitors.
Comment: One commenter suggested adding the NCQA Heart/Stroke
Recognition Program as an activity for the Population Management
subcategory. The commenter expressed their belief that attending an
educational seminar on new treatments that covers medication management
and side effects for cancer treatments such as neutropenia or immune
reactions would improve safety and result in better care for
beneficiaries.
Response: We appreciate this additional recommendation and will
consider it in future years.
Activities Related to the Behavioral Health Subcategory
Comment: We received more than 20 comments related to activities
under the Behavioral Health subcategory. One commenter agreed with our
proposed activity: ``Tobacco use: Regular engagement of MIPS eligible
clinicians or groups in integrated prevention and treatment
interventions, including tobacco use screening and cessation
interventions (refer to NQF #0028) for patients with co-occurring
conditions of behavioral or mental health and at risk factors for
tobacco dependence,'' and in addition, requested that CMS consider
adding features from a successful model such as the Million Hearts
Multidisciplinary Approach to Increase Smoking Cessation Interventions
that was demonstrated in New York City.
Response: We will consider the best way to incorporate additional
smoking cessation efforts in MIPS and our other quality programs in the
future.
Comment: Several commenters requested that CMS expand various
descriptions in the improvement activities inventory list, such as for
the activity ``Participation in research that identifies interventions,
tools or processes that can improve a targeted patient population,'' to
include reference to engagement in federally funded clinical research.
Response: We will take this suggestion into consideration for
future rulemaking.
Activities Under the Expand Practice Access Subcategory
Comment: We received only a few unique comments related to
Expanding Practice Access, most related to telehealth. These commenters
suggested that we consider additional activities under the improvement
activities performance category, potentially including telehealth
services or other activities nominated by MIPS eligible clinicians or
groups. The commenters made specific suggestions, including follow-up
inpatient telehealth consultations furnished to beneficiaries in
hospitals or SNFs, office or other outpatient visits, transitional
care management services with high medical decision complexity,
psychoanalysis, and family psychotherapy.
Response: In developing improvement activities, some of the
developer's considerations should include whether the activity is
evidenced based and applicable across service settings, and aligns with
the National Quality Strategy and CMS Quality Strategy. We will take
the commenters' suggestions into account for future rulemaking.
Activities Related to the Beneficiary Engagement Subcategory
Comment: Commenters suggested numerous nomenclatural changes within
the Activities Under Beneficiary Engagement subcategory. For example,
one commenter suggested that we refer to ``clinical registries'' in
general rather than QCDRs, since many MIPS eligible clinicians may
participate in clinical registries without using them for MIPS
participation. Other commenters suggested that we revise the wording of
the proposed activity ``Participation in CMMI models such as Million
Hearts Campaign'' to reflect that this is a model, not a ``campaign,''
and suggested that we include the wording ``standardized treatment
protocols'' in the proposed activity ``Use decision support and
protocols to manage workflow in the team to meet patient needs.'' Other
commenters suggested changes to the activity labels in Table H in
the Appendix to this final rule with comment period.
Response: We have revised the wording of the Million Hearts
activity to read ``Participation in CMMI models such as the Million
Hearts Cardiovascular Risk Reduction Model.'' In addition, we have
revised the decision support activity to read ``Use decision support
and standardized, evidence-based treatment protocols to enhance
effective workflow in the team to meet patient needs.''
Comment: Another commenter expressed concern that the proposed
activity ``Use tools to assist patients in assessing their need for
support for self-management (for example, the Patient Activation
Measure or How's My Health)'' mentioned the Patient Activation Measure,
which the commenter stated was proprietary and expensive if widely
used. The commenter recommended that we
[[Page 77194]]
consider the variety of psychometric tools that can be used to measure
not only patient motivation, but also confidence and intent to act. The
commenter stated that, for example, specifically calling out activation
inhibits innovation in health behavior change. The commenter stated that
it is possible to measure the burden of patient symptoms by using
instruments like impact index assessments. The commenter further stated
that asking patients about how much they are bothered by their symptoms
can help healthcare professionals assess the quality of life a patient
is experiencing.
Response: We recognize that the Patient Activation Measure (PAM)
survey is proprietary and does require an investment on the practices'
part if they choose to utilize it. However, in the activity noted above
related to PAM, we explain that this is an example of a tool that could
be used. Other tools to assist patients in assessing their need for
support for self-management would be acceptable for this activity.
Comment: Some commenters questioned whether a Million Hearts award
received in prior years can count for improvement activities credit as
prior awardees are not allowed to compete again. The commenters
suggested that prior year awards should count for improvement
activities credit and bonus points as well.
Response: We recognize the importance of the Million Hearts
Cardiovascular Risk Reduction Model and have included that activity in
the improvement activities inventory. All activities within the
improvement activities inventory, however, must be performed for a
continuous 90-day period that must occur within the performance period.
Activities Related to the Emergency Response and Preparedness
Subcategory
Comment: Some commenters noted that the Emergency Response and
Preparedness subcategory was the only subcategory with no high-weighted
activities and several asked for more high-weighted activities.
Response: We are changing one existing activity in the Emergency
Response and Preparedness Subcategory ``Participation in domestic or
international humanitarian volunteer work. MIPS eligible clinicians and
groups must be registered for a minimum of 6 months as a volunteer for
domestic or international humanitarian volunteer work'' to a high-
weighted activity that is ``Participation in domestic or international
humanitarian volunteer work. Activities that simply involve
registration are not sufficient. MIPS eligible clinicians must attest
to domestic or international humanitarian volunteer work for a period
of a continuous 60 days or greater.'' We have changed this activity
from requiring being registered for 6 months to participating for 60
days to be in line with our overall new performance period policy which
only requires a 90-day period. The 60-day participation would fall
within that new 90-day window. We are also changing this to a high-
weighted activity because such volunteer work is intensive, often
involves travel and working under challenging physical and clinical
circumstances. Table H in the Appendix to this final rule with
comment period reflects this revised description of the existing
activity and revised weighting.
Comment: One commenter recommended the exclusion of ``Participation
in domestic or international humanitarian volunteer work'' activity,
stating that it is unlikely to lead to improvements in the quality or
experience of care for a MIPS eligible clinician's patients. Another
commenter expressed concern that their patient satisfaction ratings
will suffer because they are actively attempting to reduce prescription
drug overdoses. The commenter suggested removing the patient
satisfaction component.
Response: We disagree that this activity is unlikely to improve
quality of care. Caring for injured and medically unwell patients
during disasters is widely described by generations of clinicians who
have volunteered for these efforts as an excellent learning experience,
and many report that their volunteer work improved the clinical skills
they bring to their routine practice and to their patients. We believe that
``Participation in domestic or international humanitarian volunteer
work'' will have a similar positive impact for MIPS eligible clinicians
and their patients.
Comment: A few commenters believed that the Congress expressly
defined remote monitoring and telehealth as a component of care
coordination in improvement activities and understood the vital role of
personal connected health in delivery of high quality clinical
practice. The commenters suggested that CMS modify improvement
activities in a manner that would reflect statutory language and
provide incentive for the conduct of improvement activities using
digital, interoperable communications.
Response: We have provided appropriate incentives through other
performance categories aligned with the policy goals for
interoperability of EHRs and for achieving widespread exchange of
health information. We also note that the statutory example of ``use of
remote monitoring or telehealth'' is reflected in several activities,
including, under the Care Coordination subcategory, ``Ensuring that there is
bilateral exchange of necessary patient information to guide patient
care that could include participating in a Health Information
Exchange.'' This would require interoperable communications. Under the
Population Management subcategory, we provide incentive for using
remote monitoring or telehealth through the activity related to Oral
Vitamin K antagonist therapy (warfarin), which provides that rural or
remote patients can be managed using remote monitoring or telehealth
options.
Comment: Other commenters supported the MIPS program in including
improvement activities as a new performance category for clinician
performance, particularly incentivizing the use of health IT,
telehealth and connection of patients to community-based services. In
addition, the commenters supported CMS increasing the weight of the
improvement activities regarding connections to community-based
services and the use of health IT and telehealth by rating them as
``high'' in the final rule with comment period.
Response: We believe that high weighting should be used for
activities that directly address areas with the greatest impact on
beneficiary care, safety, health, and well-being. We have focused high
weighting under the subcategory on those activities.
Comment: Another commenter recommended that we enhance the clarity
of the improvement activities definitions in the final rule with
comment period and with subregulatory guidance so that MIPS eligible
clinicians know what they must do to qualify for a given improvement
activity. For example, where a general and non-specific definition is
intentional to permit clinicians flexibility, the commenter requested that
CMS define expectations on how MIPS eligible clinicians can meet and
substantiate such an improvement activity requirement and specify the
evidence that MIPS eligible clinicians would be expected to retain as
documentation for a potential audit including documentation for non-
percentage-based measures. The commenter stated their concern that,
given short and ambiguous definitions in Table H in the Appendix to
this final rule with comment period, clinicians may avoid a given
[[Page 77195]]
improvement activity based on varied understandings of what satisfying
the activity entails.
Response: MIPS eligible clinicians may retain any documentation
that is consistent with the actions they took to perform each activity.
We also note that any MIPS eligible clinician may report on any
activity; for example, a cardiologist may choose to select an
improvement activity related to an emergency response and preparedness,
if applicable. We will provide MIPS eligible clinicians more
information about documentation expectations for the transition year of
MIPS in subregulatory guidance.
Activities Related to the Health Equity Subcategory
Comment: We received over 10 comments related to activities under
Health Equity. One commenter recommended that we add an activity that
encourages referrals to a clinical trial for a minority population.
Another commenter requested inclusion of an established health equity
council. Another commenter supported a Promoting Health Equity and
Continuity subcategory, and recommended including the Braveman et al.
definition of health equity and the Tool for Health and Resilience in
Vulnerable Environments or THRIVE framework.
Response: We will consider these recommendations in future years as
part of the nomination process.
Activities Related to the Care Coordination Subcategory
Comment: We received at least 10 comments related to Care
Coordination activities. One commenter recommended that we expand the
subset of activities listed for the Care Coordination subcategory in
the improvement activities inventory list to include long-term services
and supports. Another commenter supported our proposal to retain the
activities related to care management and individualized plans of care
in the proposed improvement activities inventory, and refine these
activities over time by incorporating the concept of principles of
person-centered care to coordinate care and identifying, tracking and
updating individual goals as they relate to the care plan. One
commenter recommended that participation in a Rural Health Innovation
Collaborative (RHIC) count as an improvement activity since RHIC are
recognized by Congress as organizations that can give technical support
to small practices, rural practices, and areas experiencing a shortage
of clinicians.
Response: We will work with stakeholders as part of the future
nomination process to identify additional activities.
After consideration of the comments regarding the improvement
activities inventory, we are finalizing the improvement activities and
weighting provided in Table H in the Appendix to this final rule with
comment period as proposed, with the following exceptions: one change
for one activity in the Emergency Response and Preparedness
Subcategory from a medium-weighted to a high-weighted activity; one
change for one activity in the Population Management Subcategory from
a medium-weighted to a high-weighted activity; and the addition of an
asterisk (*) in Table H in the Appendix to this final rule with
comment period next to activities that also qualify for the advancing
care information bonus (we refer readers to section II.E.6.a.(5) of
this final rule with comment period). We also included language
elaborating on the requirements for the Consultation PDMP activity. We
are correcting the reference to the Million Hearts Cardiovascular Risk
Reduction Model instead of describing it as a ``campaign;'' revising
the wording of the proposed activity ``Use decision support and
protocols to manage workflow in the team to meet patient needs'' to
read ``Use decision support and standardized treatment protocols to
manage workflow in the team to meet patient needs;'' and removing the
State Innovation Model participation activity. Our reasoning for these
changes is to alleviate confusion related to an activity based on
comments, to correct a previously incorrect term (such as the use of
the word ``campaign''), or to reflect some other change in another
section of the final rule with comment period, specifically the
inclusion of qualifying improvement activities for the advancing care
information bonus. Our reasoning for changing the CAHPS for MIPS survey weighting
to high is because the CAHPS for MIPS survey will be optional for large
groups under the quality performance category and we want to encourage
use of this survey. Another contributing factor was the need to ensure
that options beyond the CAHPS for MIPS survey were available to
provide credit for surveying, and for CAHPS submissions that did not
meet the thresholds/standards for reporting in the measure category
(largely because they did not have enough beneficiaries). Our
reasoning for removing the State
Innovation Model (SIM) activity is that SIM is a series of different
agreements between CMS and states. Clinicians are not direct
participants. In addition, we do not collect TIN/NPI combinations, so
there is no way to validate participation based on attestation. Our
reasoning for changing the weighting on the Emergency Response and
Preparedness activity is that this improvement activity requires the
clinician pay out of pocket to travel and do volunteer work (personal
costs/risks), likely contributing some donated medical durables/
expendables (practice material resources). In addition, the clinician
also misses scheduled appointments with patients (foregoing practice
financial revenue). Our reasoning for changing the weighting on the
Population Management activity is that this improvement activity is
consistent with section 1848(q)(2)(B)(iii) of the Act, which requires
the Secretary to give consideration to the circumstances of practices
located in rural areas and geographic HPSAs. Rural health clinics would
be included in that definition for consideration of practices in rural
areas. All of these changes are reflected in Table H in the Appendix to
this final rule with comment period.
(a) CMS Study on Improvement Activities and Measurement
(1) Study Purpose
Previous experience with the PQRS, VM, and Medicare EHR Incentive
programs has shown that many clinicians have errors within their data
sets, as well as problems in understanding and choosing the data that
corresponds to their selected quality measures. In CMS' quest to
create a culture of improvement using evidence-based medicine on a
consistent basis, fully understanding the strengths and limitations of
the current processes is crucial. To better understand the current
processes, we proposed to conduct a study on clinical improvement
activities and measurement to examine clinical quality workflows and
data capture using a simpler approach to quality measures.
The lessons learned in this study on practice improvement and
measurement may influence changes to future MIPS data submission
requirements. The goals of the study are to see whether there will be
improved outcomes, reduced burden in reporting, and enhancements in
clinical care by selected MIPS eligible clinicians desiring:
A more data-driven approach to quality measurement.
Measure selection unconstrained by a CEHRT program or
system.
[[Page 77196]]
Improving the quality of data submitted to CMS.
Enabling CMS to get data more frequently and provide feedback
more often.
(2) Study Participation Credit and Requirements: Study Participation
Eligibility
This study will select 10 non-rural individual MIPS
eligible clinicians or groups of less than three non-rural MIPS
eligible clinicians, 10 rural individual MIPS eligible clinicians or
groups of less than three rural MIPS eligible clinicians, 10 groups of
three to eight MIPS eligible clinicians, five groups of nine to 20 MIPS
eligible clinicians, three groups of 21 to 100 MIPS eligible
clinicians, two groups of greater than 100 MIPS eligible clinicians,
and two specialist groups of MIPS eligible clinicians. Participation
would be open to a limited number of MIPS eligible clinicians in rural
settings and non-rural settings. A rural area is defined at Sec.
414.1305, and a non-rural area would be any MIPS eligible clinicians or
groups not included as part of the rural definition. MIPS eligible
clinicians and groups would need to sign up from January 1, 2017, to
January 31, 2017. The sign-up process will utilize a web-based
interface. Participants would be approved on a first-come, first-served
basis and must meet all the required criteria. Selection will also
account for representation across different states and different
clinician settings that fall within the participation eligibility
criteria.
MIPS eligible clinicians and groups in the CMS study on practice
improvement and measurement will receive full credit (40 points) for
the improvement activities performance category of MIPS after
successfully electing, participating and submitting data to the study
coordinators at CMS for the full calendar year.
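The enrollment quotas and first-come, first-served selection described above can be sketched as follows. This is a minimal illustration only; the cohort labels and the `enroll` helper are our shorthand, not terminology or logic from the rule.

```python
# Illustrative sketch of the study's enrollment quotas described above.
# Cohort labels are descriptive shorthand, not CMS terminology.

COHORT_QUOTAS = {
    "non-rural individual or group of < 3": 10,
    "rural individual or group of < 3": 10,
    "group of 3 to 8": 10,
    "group of 9 to 20": 5,
    "group of 21 to 100": 3,
    "group of > 100": 2,
    "specialist group": 2,
}

def enroll(applicants):
    """Accept applicants in sign-up order until each cohort's quota is met."""
    remaining = dict(COHORT_QUOTAS)
    accepted = []
    for name, cohort in applicants:  # applicants listed first-come, first-served
        if remaining.get(cohort, 0) > 0:
            remaining[cohort] -= 1
            accepted.append(name)
    return accepted

# Total study size implied by the quotas above: 42 participants.
print(sum(COHORT_QUOTAS.values()))  # 42
```

Applicants beyond a cohort's quota are simply not accepted, mirroring the first-come, first-served approval described above.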
(3) Procedure
Based on feedback and surveys from MIPS eligible clinicians, study
measurement data will be collected at baseline and every three months
(on a quarterly basis) thereafter for the duration of the calendar
year. Study participants who can submit data on a more frequent basis
will be encouraged to do so.
Participants will be required to attend a monthly focus group to
share lessons learned along with providing survey feedback to monitor
effectiveness. The focus group would also include providing visual
displays of data, workflows, and best practices to be shared amongst
the participants to obtain feedback and make further improvements. The
monthly focus groups would be used to learn from the practices on how
to be more agile as we test new ways of measure recording and workflow.
For CY 2017, the participating MIPS eligible clinicians or groups
would submit their data and workflows for a minimum of three MIPS CQMs
that are relevant and prioritized by their practice. One of the
measures must be an outcome measure, and one must be a patient
experience measure. The participating MIPS eligible clinicians could
elect to report on more measures as this would provide more options
from which to select in subsequent years for purposes of measuring
improvement.
If MIPS eligible clinicians or groups calculate the measures
working with a QCDR, qualified registry, or CMS-approved third party
intermediary, we would use the same data validation process described
in the proposed rule (81 FR 28279). We would only collect the numerator
and denominator for the measures selected for the overall population,
all patients/all payers. This would enable the practices to build the
measures based on what is important for their area of practice while
increasing the quality of care.
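As a rough illustration of the numerator/denominator submission described above, the sketch below computes an all-patients/all-payers performance rate per measure. The field names, measure IDs, and sample counts are hypothetical, not a CMS data specification.

```python
# Hypothetical sketch: each participant reports, per selected MIPS CQM,
# a numerator (patients meeting the measure) and a denominator (eligible
# patients, all patients/all payers). Field names are illustrative only.

def performance_rate(numerator, denominator):
    """Return the measure performance rate, or None if no eligible patients."""
    if numerator < 0 or denominator < 0 or numerator > denominator:
        raise ValueError("numerator must be between 0 and denominator")
    if denominator == 0:
        return None
    return numerator / denominator

submission = [
    {"measure_id": "CQM-A", "numerator": 120, "denominator": 150},
    {"measure_id": "CQM-B", "numerator": 45, "denominator": 90},
]

rates = {m["measure_id"]: performance_rate(m["numerator"], m["denominator"])
         for m in submission}
print(rates)  # {'CQM-A': 0.8, 'CQM-B': 0.5}
```

Because only the numerator and denominator are collected, a practice is free to build each measure from whatever underlying data capture fits its area of practice.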
The first round of the study will last for 1 year, after which new
participants will be recruited. Participants electing to continue in
future years would be afforded the opportunity to opt-in or opt-out
following the successful submission of data to us. The first
opportunity to continue in the study would be at the end of the 2017
performance period. Eligible clinicians who elect to join the study but
fail to participate in the study requirements and/or fail to
successfully submit the data required will be removed from the study.
Unsuccessful study participants will then be subject to the full
requirements for the improvement activities performance category.
In future years, participating MIPS eligible clinicians or groups
would select three of the measures for which they have baseline data
from the 2017 performance period to compare against later performance
years.
We requested comment on the study and welcome suggestions on future
study topics.
The following is a summary of the comments we received regarding
the CMS study on improvement activities and measurement.
Comment: Commenters recommended that CMS monitor performance of the
activities by the various MIPS eligible clinicians and groups for
trends and consider whether activities result in better outcomes.
Response: We will consider these issues as we develop the study.
Comment: Some commenters supported CMS' proposal to conduct a study
on improvement activities and measurement, in general, to examine
clinical quality workflows and data capture using a simpler approach to
quality measures. The commenters believed that CMS proposes an
appropriate incentive by allowing a limited number of selected
clinicians and groups to receive full credit (60 points) for the
improvement activities performance category if they participate in the
study. However, the commenters recommended that CMS expand this
opportunity so that it is available to a broader and more diverse swath
of practices, including emergency medicine practices. Other commenters
supported our plans to conduct an annual call for activities to build
the improvement activities inventory and our plans to study
measurement, workflow, and current challenges for clinical practices.
The commenters suggested that we ensure that we study a diverse range
of participants when conducting that analysis.
Response: We plan to expand as we learn from the initial study,
which is currently open to all types of practices. We acknowledge that
there are many variables affecting measurement and will continue to
make sure we look at this diversification as we study different methods
of measurement.
Comment: One commenter was concerned about the study and wanted to
know if CMS expects vendors to develop EHR workflows and reports for
study measures and if vendors would be expected to support the study's
requirements for more frequent data submission.
Response: We will work with these vendors and others as the study
evolves. We note that for this study, we will use measures that already
exist in programs, so that no new development is required for technical
workflows or documentation requirements for those products included on
the ONC certified health IT product list (CHPL).
Comment: Another commenter agreed that improvement activities study
participants should receive full credit for improvement activities and
that those participants that do not adhere to the study guidelines
should be removed and subject to typical improvement activities
requirements. This commenter recommended that CMS provide a final date
by which it plans to make these exclusion determinations and that after
this date, CMS can work with the ex-participant
[[Page 77197]]
to help them complete the year. They also recommended that
all participants who get excluded from the study not be allowed to
participate in the study the following year.
Response: We will work with stakeholders to further define future
participation requirements as this study evolves.
After consideration of the comments regarding the CMS study on
improvement activities and measurement, we are finalizing the policies
with the exception that successful participation in the pilot would
result in full credit for the improvement activities performance
category of 40 points, not 60 points, in accordance with the revised
finalized scoring. If participants do not meet the study guidelines,
they will be removed from the study and will need to follow the current
improvement activities requirements.
(8) Improvement Activities Policies for Future Years of the MIPS
Program
(a) Proposed Approach for Identifying New Subcategories
We proposed, for future years of MIPS, to consider the addition of
a new subcategory to the improvement activities performance category
only when the following criteria are met:
The new subcategory represents an area that could
highlight improved beneficiary health outcomes, patient engagement and
safety based on evidence.
The new subcategory has a designated number of activities
that meet the criteria for an improvement activity and cannot be
classified under the existing subcategories.
Newly identified subcategories would contribute to
improvement in patient care practices or improvement in performance on
quality measures and cost performance categories.
In future years, MIPS eligible clinicians or groups would have an
opportunity to nominate additional subcategories, along with activities
associated with each of those subcategories that are based on criteria
specified for these activities, as discussed in the proposed rule. We
requested comments on this proposal.
We did not receive any comments regarding policies for identifying
new improvement activities subcategories in future years of the MIPS
program. We therefore are finalizing the addition of a new subcategory
to the improvement activities performance category only when the
following criteria are met:
The new subcategory represents an area that could
highlight improved beneficiary health outcomes, patient engagement and
safety based on evidence.
The new subcategory has a designated number of activities
that meet the criteria for an improvement activity and cannot be
classified under the existing subcategories.
Newly identified subcategories would contribute to
improvement in patient care practices or improvement in performance on
quality measures and cost performance categories.
(b) Request for Comments on Call for Measures and Activities Process
for Adding New Activities
We plan to develop a call for activities process for future years
of MIPS, where MIPS eligible clinicians or groups and other relevant
stakeholders may recommend activities for potential inclusion in the
improvement activities inventory. As part of the process, MIPS eligible
clinicians or groups would be able to nominate additional activities
that we could consider adding to the improvement activities inventory.
The MIPS eligible clinician or group or relevant stakeholder would be
able to provide an explanation of how the activity meets all the
criteria we have identified. This nomination and acceptance process
would, to the extent possible, parallel the annual call for
measures process already conducted by CMS for quality measures. The
final improvement activities inventory for the performance year would
be published in accordance with the overall MIPS rulemaking timeline.
In addition, in future years we anticipate developing a process and
establishing criteria to remove activities from, or add new activities
to, the improvement activities performance category.
Additionally, prospective activities that are submitted through a
QCDR could also be included as part of a beta-test process that may be
instrumental for future years to determine whether that activity should
be included in the improvement activities inventory based on specific
criteria noted above. MIPS eligible clinicians or groups that use QCDRs
to capture data associated with an activity, for example the frequency
in administering depression screening and a follow-up plan, may be
requested to voluntarily submit that same data in year 2 to begin
identifying a baseline for improvement for subsequent year analysis.
This is not intended to require any MIPS eligible clinician or group to
submit improvement activities only via QCDR from 1 year to the next or
to require the same activity from 1 year to the next. Participation in
doing so, however, can help to identify how activities can contribute
to improved outcomes. This data submission process will be considered
part of a beta-test to: (1) determine whether the activity is being
regularly conducted and effectively executed and (2) determine whether
the activity warrants continued inclusion on the improvement activities
inventory.
The data would help capture baseline information to begin measuring
improvement and inform the Secretary of the likelihood that the
activity would result in improved outcomes. If an activity is submitted
and reported by a QCDR, it would be reviewed by us for final inclusion
in the improvement activities inventory the following year, even if
these activities are not submitted through the future call for measures
and activities process. We intend, in future performance years, to
begin measuring improvement activities data points for all MIPS
eligible clinicians and to award scores based on performance and
improvement. We solicited comment on how best to collect such
improvement activities data and factor it into future scoring under
MIPS.
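To make the baseline-comparison idea concrete, here is a minimal sketch of measuring improvement in an activity's data point, such as a depression-screening rate captured via a QCDR, against a baseline year. The function name, rate scale, and sample values are our own illustration, not CMS methodology.

```python
# Minimal sketch of comparing an activity's data point (e.g., a
# depression-screening rate captured via a QCDR) against a baseline
# year. The function and rate scale are illustrative, not CMS policy.

def relative_improvement(baseline_rate, current_rate):
    """Return relative change from baseline; positive values indicate improvement."""
    for rate in (baseline_rate, current_rate):
        if not 0.0 <= rate <= 1.0:
            raise ValueError("rates must be between 0 and 1")
    if baseline_rate == 0.0:
        return None  # no baseline to compare against
    return (current_rate - baseline_rate) / baseline_rate

# A practice screening 50% of eligible patients at baseline and 60%
# in a later year shows a 20% relative improvement.
print(round(relative_improvement(0.50, 0.60), 2))  # 0.2
```

A baseline captured in year 1 is what makes this comparison possible in year 2 and beyond, which is why voluntary resubmission of the same data point is requested.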
We requested comments on these approaches and on any other
considerations we should take into account when developing these types
of approaches for future rulemaking.
The following is a summary of the comments we received regarding
improvement activities policies for identifying new improvement
activities in future years of the MIPS program.
Comment: Some commenters recommended that CMS limit participants
from reporting on the same activity over several performance periods in
future years.
Other commenters recommended that CMS allow MIPS eligible
clinicians to maintain improvement activities over time and opposed CMS
proposals to have more stringent requirements. These commenters were
concerned that by imposing limits on frequency of reporting of the same
activity over several years, CMS would be encouraging practices to
implement temporary instead of permanent improvements and would risk
creating short-lived activities that lack consistency across time,
which is not beneficial to patients and is confusing and disruptive to
MIPS eligible clinicians' workflow.
A few commenters recommended that CMS permit MIPS eligible
clinicians to select from a wide range of improvement activities, allow
MIPS eligible clinicians to perform them in a way that is effective and
reasonable for both the MIPS eligible clinicians and their patient
population, and refrain from imposing restrictive specifications
[[Page 77198]]
regarding how MIPS eligible clinicians document and report their
activities. One commenter suggested that CMS keep the broad list of
improvement activities and publish additional detail through non-
binding clarification or guidance, rather than in regulatory text,
which may limit innovation and flexibility.
Response: We recognize that some activities may be improved upon
over time, which would support reporting on the same activity across
multiple performance periods. We also note that other activities, such
as providing 24/7 access, may provide limited opportunity to
demonstrate improvement over time, which would minimize the value of
reporting this same activity over subsequent years. We will consider
this for future
rulemaking. It is our intention to continue to allow MIPS eligible
clinicians to select from a wide range of improvement activities, allow
MIPS eligible clinicians to perform them in a way that is effective and
reasonable for both the MIPS eligible clinicians and their patient
population, and refrain from imposing restrictive specifications
regarding how MIPS eligible clinicians document and report their
activities. In addition, we intend to keep the broad list of
improvement activities and publish additional detail through non-
binding clarification or guidance as we are able.
Comment: Other commenters suggested that in the future, CMS
evaluate whether: (1) Improvement activities should be worth more than
15 percent of the final score; (2) individual activity weights should
be increased; (3) the number and type of MIPS eligible clinicians
reporting on health equity improvement activities should be changed;
(4) how performance on health equity improvement activities correlates
with quality performance; (5) whether improvement activities result in
better outcomes; and (6) what additional improvement activities should
be included in MIPS. Some commenters suggested that some activities in
the improvement activities performance category require considerable
additional resources, and may warrant more points than 20--the proposed
standard for ``high.'' Other commenters expressed concern about the
proposed scoring for improvement activities, noting that the category
is a new one that has not been implemented in previous programs and
that activities may favor outpatient primary care.
Response: We intend to consider these comments in future
rulemaking, and will monitor MIPS eligible clinicians' performance in
the improvement activities performance category carefully to inform
those policy decisions. We welcome commenters' specific suggestions for
additional activities or activities that may merit additional points
beyond the ``high'' level we are adopting in the future. We refer
readers to the section II.E.6. of this final rule with comment period
for additional discussion of the public comments that we received on
the MIPS program's scoring methodology.
Comment: A few commenters agreed with the proposal that future
scores for improvement activities should be based on outcomes and
improvement. The commenters believed that MIPS eligible clinicians
engaged in improvement activities should submit quality measures that
reflect the focus of their improvement activities and demonstrate the
quality improvement by engaging in those improvement activities. Other
commenters suggested that we use improvement activities as a test bed
for innovation to identify how activities could lead to improved
outcomes and readiness for APM participation. The commenters encouraged
collaboration with specialty physicians, medical societies, and other
stakeholders to evaluate improvement activities continually.
Response: We will take the commenters' suggestion that we should
more closely link measures selected under the quality performance
category with activities selected under the improvement activities
performance category into consideration in the future. We note that for
the transition year of MIPS, we believe we should provide MIPS eligible
clinicians with flexibility in selecting measures and activities that
are relevant to their practices.
We intend to monitor MIPS eligible clinicians' participation in
improvement activities carefully, and as the commenters suggested, we
will continue examining potential relationships to quality measurement,
advancing care information measures leveraging CEHRT, and APM
participation readiness. We intend to continue collaborating with
specialty clinicians, medical societies, and other stakeholders when
conducting these evaluations.
Comment: Some commenters opposed adding additional measurement and
reporting requirements for improvement activities in future years and
stated that this would increase MIPS eligible clinician burden and is
not in line with CMS's objective to simplify MIPS. The commenters
suggested that CMS view the improvement activities inventory as fluid
and to formalize a standard process to add new activities each year.
Response: We will take these comments into account as we consider
improvement activities policy for future program years. Our intent,
however, is to minimize burden on MIPS eligible clinicians. We will
consider whether or not we should adopt a standard process for adding
activities in the future.
Comment: Some commenters recommended that CMS allow MIPS eligible
clinicians or groups to nominate additional activities that CMS would
consider adding to the improvement activities inventory. Specifically,
they recommended that CMS draw upon working sessions with groups such
as AHRQ, ONC, HRSA, and other federal agencies to create a patient-
generated health data framework which would seek to identify best
practices, gaps, and opportunities for progress in the collection and
use of health data for research and care delivery.
Response: We intend to follow a process similar to that now
employed in the annual Call for Measures for changes in the improvement
activities inventory. It is important to keep in mind that in
developing activities, some of the developer's considerations should
include whether the activity is evidence-based, applicable across
service settings, and aligned with the National Quality Strategy and
CMS Quality Strategy.
Comment: Several commenters stated that, as CMS implements new
improvement activities in future years, they would support a process
similar to the current CMS Call for Quality Measures and recommended
that CMS clearly communicate the timelines and requirements to the
public early and often to allow for the preparation of submissions.
Response: Our intent is to proceed with this process for the
transition year of MIPS.
Comment: A few commenters expressed concern about program
requirements for MIPS eligible clinicians reporting as a group and
future changes in the program. The commenters also requested more
direction regarding documentation to maintain for these activities in
the event of an audit.
Response: We will verify data through the data validation and audit
process as necessary. MIPS eligible clinicians may retain any
documentation that is consistent with the actions they took to perform
each activity.
Comment: Other commenters proposed that CMS allow, for the
improvement activities performance category, that individual activities
may be pursued by an individual MIPS eligible clinician for up to 3
years, but
[[Page 77199]]
that following this period, MIPS eligible clinicians be required to
select a different area of focus.
Response: We will consider this in the future.
Comment: One commenter supported CMS's proposal to study workflow
and data capture to understand the limitations. This commenter
encouraged CMS to include MIPS eligible clinicians from specialty
behavioral health organizations as part of this study.
Response: We will work with key stakeholders on the workflow and
data capture for better understanding of how to measure improvement of
activities.
Comment: Some commenters expressed support for the approach for
identifying new subcategories and activities in the future and one
suggested that CMS develop a template designed to ensure that proposed
improvement activities are clearly measurable and also that the
``value'' of the improvement activity can be related to an existing
improvement activity.
Response: We will work with stakeholders to further refine this
approach for future consideration.
Comment: Another commenter suggested that rather than looking to
restrictions on the use of QCDRs as improvement activities, in future
years, we should include an assessment of how well an improvement
activity was accomplished, including demonstration of resulting
improvements in outcomes and/or patient experience from the improvement
activity. This commenter believed that we should take this more
positive approach to ensure improvement activities are being effective
rather than trying to determine whether the clinician is using a QCDR
to achieve ``too many'' improvement activities.
Response: We will work with the stakeholder community in future
years to determine how this could best be addressed.
Comment: One commenter was concerned that MIPS did not recognize
that practices are likely to develop multi-year improvement strategies
that removal of an approved improvement activity in the annual update
would undermine program stability. To address this concern, this
commenter recommended that improvement activity topics identified for
termination should be allowed to continue for the transition year
beyond initial notification to allow for sufficient notice to
participating practices.
Response: We will work with the stakeholder community in future
years to best determine how to maintain the annual activity list.
We will take the comments regarding improvement activities policies
for identifying new improvement activities in future years of the MIPS
program into consideration for future rulemaking.
(c) Request for Comments on Use of QCDRs for Identification and
Tracking of Future Activities
In future years, we expect to learn more about improvement
activities and how the inclusion of additional measures and activities
captured by QCDRs could enhance the ability of MIPS eligible clinicians
or groups to capture and report on more meaningful activities. This is
especially true for specialty groups. In the future, we may propose the
use of QCDRs for identification and acceptance of additional measures
and activities, which is in alignment with section 1848(q)(1)(E) of the
Act, which encourages the use of QCDRs, as well as with section
1848(q)(2)(B)(iii)(II) of the Act related to the population management
subcategory. We recognize, through the MIPS and APMs RFI comments and
interviews with organizations that represent non-patient facing MIPS
eligible clinicians or groups and specialty groups, that QCDRs may
provide for a more diverse set of measures and activities under
improvement activities than are possible to list under the current
improvement activities inventory. This diverse set of measures and
activities, which we can validate, affords specialty practices
additional opportunity to report on more meaningful activities in
future years. QCDRs may also provide the opportunity for longer-term
data collection processes which will be needed for future year
submission on improvement, in addition to achievement. Use of QCDRs
also supports ongoing performance feedback and allows for
implementation of continuous process improvements. We believe that for
future years, QCDRs would be allowed to define specific improvement
activities for specialty and non-patient facing MIPS eligible
clinicians or groups through the already-established QCDR approval
process for measures and activities. We requested comments on this
approach. We did not receive any comments regarding the use of QCDRs
for identification and tracking of future activities.
(d) Request for Comments on Activities That Will Advance the Usage of
Health IT
The use of health IT is an important aspect of care delivery
processes described in many improvement activities. In this final rule
with comment period we have finalized a policy to allow MIPS eligible
clinicians to achieve a bonus in the advancing care information
performance category when they use functions included in CEHRT to
complete eligible activities from the improvement activities inventory.
Please refer to section II.E.5.g. of this final rule with comment
period for details on how improvement activities using CEHRT relate to
the objectives and measures of the advancing care information and
improvement activities performance categories.
In addition to those functions included under the CEHRT definition,
ONC certifies technology for additional emerging health IT capabilities
which may also be important for enabling activities included in the
improvement activities inventory, such as technology certified to
capture social, psychological, and behavioral data according to the
criterion at 80 FR 62631, and technology certified to generate and
exchange an electronic care plan (as described at 80 FR 62648). In the
future, we may consider including these emerging certified health IT
capabilities as part of activities within the improvement activities
inventory. By referencing these certified health IT capabilities in
improvement activities, clinicians would be able to earn credit under
the improvement activities performance category while gaining
experience with certification criteria that may be reflected as part of
the CEHRT definition at a later time. Moreover, health IT developers
will be able to innovate around these relevant standards and
certification criteria to better serve clinicians' needs.
We invite comments on this approach to encourage continued
innovation in health IT to support improvement activities.
g. Advancing Care Information Performance Category
(1) Background and Relationship to Prior Programs
(a) Background
The American Recovery and Reinvestment Act of 2009 (ARRA), which
included the Health Information Technology for Economic and Clinical
Health Act (HITECH Act), amended Titles XVIII and XIX of the Act to
authorize incentive payments and Medicare payment adjustments for EPs
to promote the adoption and meaningful use of CEHRT. Section 1848(o) of
the Act provides the statutory basis for the Medicare incentive
payments made to meaningful EHR users. Section 1848(a)(7) of the Act
also establishes downward payment adjustments, beginning with CY 2015,
for EPs who
[[Page 77200]]
are not meaningful users of CEHRT for certain associated EHR reporting
periods. (For a more detailed explanation of the statutory basis for
the Medicare and Medicaid EHR Incentive Programs, see the July 28, 2010
Stage 1 final rule titled, ``Medicare and Medicaid Programs; Electronic
Health Record Incentive Program; Final Rule'' (75 FR 44316 and 44317).)
A primary policy goal of the EHR Incentive Program is to encourage
and promote the adoption and use of CEHRT among Medicare and Medicaid
health care providers to help drive the industry as a whole toward the
use of CEHRT. As described in the final rule titled ``Medicare and
Medicaid Programs; Electronic Health Record Incentive Program--Stage 3
and Modifications to Meaningful Use in 2015 Through 2017'' (hereinafter
referred to as the ``2015 EHR Incentive Programs final rule'') (80 FR
62769), the HITECH Act outlined several foundational requirements for
meaningful use and for EHR technology. CMS and ONC have subsequently
outlined a number of key policy goals which are reflected in the
current objectives and measures of the program and the related
certification requirements (80 FR 62790). Current Medicare EP
performance on these key goals is varied, with EPs demonstrating high
performance on some objectives while others represent a greater
challenge.
(b) MACRA Changes
Section 1848(q)(2)(A) of the Act, as added by section 101(c) of the
MACRA, includes the meaningful use of CEHRT as a performance category
under the MIPS, referred to in the proposed rule and in this final rule
with comment period as the advancing care information performance
category, which will be reported by MIPS eligible clinicians as part of
the overall MIPS program. As required by sections 1848(q)(2) and (5) of
the Act, the four performance categories shall be used in determining
the MIPS final score for each MIPS eligible clinician. In general, MIPS
eligible clinicians will be evaluated under all four of the MIPS
performance categories, including the advancing care information
performance category. This includes MIPS eligible clinicians who were
not previously eligible for the EHR Incentive Program incentive
payments under section 1848(o) of the Act or subject to the EHR
Incentive Program payment adjustments under section 1848(a)(7) of the
Act, such as physician assistants, nurse practitioners, clinical nurse
specialists, certified registered nurse anesthetists, and hospital-
based EPs (as defined in section 1848(o)(1)(C)(ii) of the Act).
Understanding that these MIPS eligible clinicians may not have prior
experience with CEHRT and the objectives and measures under the EHR
Incentive Program, we proposed a scoring methodology within the
advancing care information performance category that provides
flexibility for MIPS eligible clinicians from early adoption of CEHRT
through advanced use of health IT. In the proposed rule (81 FR 28230
through 28233), we also proposed to reweight the advancing care
information performance category to zero in the MIPS final score for
certain hospital-based and other MIPS eligible clinicians where the
measures proposed for this performance category may not be available or
applicable to these types of MIPS eligible clinicians.
(c) Considerations in Defining Advancing Care Information Performance
Category
In implementing MIPS, we intend to develop the requirements for the
advancing care information performance category to continue supporting
the foundational objectives of the HITECH Act, and to encourage
continued progress on key uses such as health information exchange and
patient engagement. These more challenging objectives are essential to
leveraging CEHRT to improve care coordination and they represent the
greatest potential for improvement and for significant impact on
delivery system reform in the context of MIPS quality reporting.
In developing the requirements and structure for the advancing care
information performance category, we considered several approaches for
establishing a framework that would naturally integrate with the other
MIPS performance categories. We considered historical performance on
the EHR Incentive Program objectives and measures, feedback received
through public comment, and the long term goals for delivery system
reform and quality improvement strategies.
One approach we considered would be to maintain the current
structure of the Medicare EHR Incentive Program and award full points
for the advancing care information performance category for meeting all
of the objectives and measures finalized in the 2015 EHR Incentive
Programs final rule, and award zero points for failing to meet all of
these requirements. This method would be consistent with the current
EHR Incentive Program and is based on objectives and measures already
established in rulemaking. However, we considered and dismissed this
approach as it would not allow flexibility for MIPS eligible clinicians
and would not allow us to effectively measure performance in the
advancing care information performance category for MIPS eligible
clinicians who have taken incremental steps toward the use of CEHRT, or
to recognize exceptional performance for MIPS eligible clinicians who
have excelled in any one area. This is particularly important as many
MIPS eligible clinicians may not have had past experience relevant to
the advancing care information performance category and use of EHR
technology because they were not previously eligible to participate in
the Medicare EHR Incentive Program. This approach also does not allow
for differentiation among the objectives and measures that have high
adoption and those where there is potential for continued advancement
and growth.
We subsequently considered several methods which would allow for
more flexibility and provide CMS the opportunity to recognize partial
or exceptional performance among MIPS eligible clinicians for the
measures under the advancing care information performance category. We
decided to design a framework that would allow for flexibility and
multiple paths to achievement under this category while recognizing
MIPS eligible clinicians' efforts at all levels. Part of this framework
requires moving away from the concept of requiring a single threshold
for a measure, and instead incentivizes continuous improvement, and
recognizes onboarding efforts among late adopters and MIPS eligible
clinicians facing continued challenges in full implementation of CEHRT
in their practice.
Below is a summary of the comments received on our overall approach
to the advancing care information performance category under MIPS:
Comment: A commenter did not support the name change, expressing
concern that it is attempting to draw a distinction without a
difference and is going to cause confusion. The commenter urged CMS to
return to the term ``meaningful use''.
Response: We believe that the name ``advancing care information''
is appropriate to distinguish the MIPS performance category from
meaningful use under the EHR Incentive Programs. We note that the term
``meaningful use'' still applies for purposes of the Medicare and
Medicaid EHR Incentive Programs. The reporting requirements and scoring
to demonstrate meaningful use were established in regulation under the
EHR Incentive Programs and vary substantially from the requirements and
scoring finalized for the advancing care
[[Page 77201]]
information performance category in the MIPS program.
(2) Advancing Care Information Performance Category Within MIPS
In defining the advancing care information performance category for
the MIPS, we considered stakeholder feedback and lessons learned from
our experience with the Medicare EHR Incentive Program. Specifically,
we considered feedback from the Stage 1 (75 FR 44313) and Stage 2 (77
FR 53967) EHR Incentive Program rules, and the 2015 EHR Incentive
Programs final rule (80 FR 62769), as well as comments received from
the MIPS and APMs RFI (80 FR 59102). We have learned from this feedback
that clinicians desire flexibility to focus on health IT implementation
that is right for their practice. We have also learned that updating
software, training staff and changing practice workflows to accommodate
new technology can take time, and that clinicians need time and
flexibility to focus on the health IT activities that are most relevant
to their patient population. Clinicians also desire consistent
timelines and reporting requirements to simplify and streamline the
reporting process. Recognizing this, we have worked to align the
advancing care information performance category with the other MIPS
performance categories, which would streamline reporting requirements,
timelines and measures in an effort to reduce burden on MIPS eligible
clinicians.
The implementation of the advancing care information performance
category is an important opportunity to increase clinician and patient
engagement, improve the use of health IT to achieve better patient
outcomes, and continue to meet the vision of enhancing the use of CEHRT
as defined under the HITECH Act. In the proposed rule (81 FR 28220), we
proposed substantial flexibility in how we would assess MIPS eligible
clinician performance for the new advancing care information
performance category. We proposed to emphasize performance in the
objectives and measures that are the most critical and would lead to
the most improvement in the use of health IT to advance health care
quality. We intend to promote innovation so that technology can be
interconnected easily and securely, and data can be accessed and
directed where and when it is needed to support patient care. These
objectives include Patient Electronic Access, Coordination of Care
Through Patient Engagement, and Health Information Exchange, which are
essential to leveraging CEHRT to improve care. At the same time, we
proposed to eliminate reporting on objectives and measures in which the
vast majority of clinicians already achieve high performance--which
would reduce burden, encourage greater participation and direct MIPS
eligible clinicians' attention to higher-impact measures. Our proposal
balances program participation with rewarding performance on high-
impact objectives and measures, which we believe would make the overall
program stronger and further the goals of the HITECH Act.
(a) Advancing the Goals of the HITECH Act in MIPS
Section 1848(o)(2)(A) of the Act requires that the Secretary seek
to improve the use of EHRs and health care quality over time by
requiring more stringent measures of meaningful use. In implementing
MIPS and the advancing care information performance category, we sought
to improve and encourage the use of CEHRT over time by adopting a new,
more flexible scoring methodology, as discussed in the proposed rule
(81 FR 28220) that would more effectively allow MIPS eligible
clinicians to reach the goals of the HITECH Act, and would allow MIPS
eligible clinicians to use EHR technology in a manner more relevant to
their practice. This new, more flexible scoring methodology puts a
greater focus on Patient Electronic Access, Coordination of Care
Through Patient Engagement, and Health Information Exchange--objectives
we believe are essential to leveraging CEHRT to improve care by
engaging patients and furthering interoperability. This methodology
would also de-emphasize objectives in which clinicians have
historically achieved high performance with median performance rates of
over 90 percent for the last 2 years. We believe shifting focus away
from these objectives would reduce burden, encourage greater
participation, and direct attention to other objectives and measures
which have significant room for continued improvement. Through this
flexibility, MIPS eligible clinicians would be incentivized to focus on
those aspects of CEHRT that are most relevant to their practice, which
we believe would lead to improvements in health care quality.
We also sought to increase the adoption and use of CEHRT by
incorporating such technology into the other MIPS performance
categories. For example, in section II.E.6.a.(2)(f) of the proposed
rule (81 FR 28247), we proposed to incentivize electronic reporting by
awarding a bonus point for submitting quality measure data using CEHRT.
Additionally, in section II.E.5.f. of the proposed rule (81 FR 28209),
we aligned some of the activities under the improvement activities
performance category such as Care Coordination, Beneficiary Engagement
and Achieving Health Equity with a focus on enhancing the use of CEHRT.
We believe this approach would strengthen the adoption and use of
certified EHR systems and program participation consistent with the
provisions of section 1848(o)(2)(A) of the Act.
Below is a summary of the comments received regarding our overall
approach to requirements under the advancing care information
performance category:
Comment: Many commenters noted that what we proposed is even more
complicated than Stage 3 of meaningful use. Most commenters appreciated
the increased flexibility. One commenter appreciated the proposal but
did not believe that it went far enough. They noted that there should
be widespread health data interoperability throughout the clinical data
ecosystem and not just between meaningful users. Many commenters did
not support the retention of the all-or-nothing approach to scoring for
the advancing care information performance category. Many wanted a less
prescriptive approach to allow clinicians to be creative in applying
technology to their own unique workflows. Some noted that clinicians
should not be penalized for actions that they cannot control such as
patient actions in certain measures. One recommended that CMS focus its
efforts on increasing functional interoperability between and among EHR
vendors. Another commenter explained that CMS efforts to date do not go
far enough toward the attainment of widespread health data
interoperability, and stated that CMS should provide advancing care
information performance category credit for activities that demonstrate a MIPS
eligible clinician's use of digital clinical data to inform patient
care. Many noted that this category is too similar to the existing
meaningful use framework and should be further modified.
Response: We have carefully considered and will address these
comments in more detail in the following sections of this final rule
with comment period as we further describe the final policies for the
advancing care information performance category. We note that within
the proposed requirements for the performance category, we sought to
balance the new requirements under MACRA with our goal of allowing
greater flexibility and providing consistency for clinicians with prior
experience in the Medicare and Medicaid EHR Incentive Programs. This
consistency includes maintaining
[[Page 77202]]
the definition of CEHRT (as adapted from the EHR Incentive Program) and
specifications for the applicable measures. We believe this consistency
will ease the transition to MIPS and allow MIPS eligible clinicians to
adapt to the new program requirements quickly and with ease. We also
believe this will aid EHR vendors in their development efforts for MIPS
as many of the requirements are consistent with prior policy finalized
for the EHR Incentive Program in previous years.
We hope to continue to work with our stakeholders over the coming
years so that we can continue to improve the framework and
implementation of this performance category in order to improve health
outcomes for patients across the country.
(b) Future Considerations
The restructuring of program requirements described in this final
rule with comment period is geared toward increasing participation and
EHR adoption. We believe this is the most effective way to encourage
the adoption of CEHRT, and introduce new MIPS eligible clinicians to
the use of certified EHR technology and health IT overall.
We will continue to review and evaluate MIPS eligible clinician
performance in the advancing care information performance category, and
will consider evolutions in health IT over time as it relates to this
performance category. Based on our ongoing evaluation, we expect to
adopt changes to the scoring methodology for the advancing care
information performance category to ensure the efficacy of the program
and to ensure increased value for MIPS eligible clinicians and the
Medicare Program, as well as to adopt more stringent measures of
meaningful use as required by section 1848(o)(2)(A) of the Act.
Potential changes may include establishing benchmarks for MIPS
eligible clinician performance on the advancing care information
performance category measures, and using these benchmarks as a baseline
or threshold for future reporting. This may include scoring for
performance improvement over time and the potential to reevaluate the
efficacy of measures based on these analyses. For example, in future
years we may use a MIPS eligible clinician's prior performance on the
advancing care information performance category measures as comparison
for the subsequent year's performance category score, or compare a MIPS
eligible clinician's performance category score to peer groups to
measure their improvement and determine a performance category score
based on improvement over those benchmarks or peer group comparisons.
This type of approach would drive continuous improvement over time
through the adoption of more stringent performance standards for the
advancing care information performance category measures.
We are committed to continual review, improvement and increased
stringency of the advancing care information performance category
measures as directed under section 1848(o)(2)(A) of the Act, both to
ensure program efficacy and to ensure value
for the MIPS eligible clinicians reporting the advancing care
information performance category measures. We solicited comment on
further methods to increase the stringency of the advancing care
information performance category measures in the future.
We additionally solicited comment on the concept of a holistic
approach to health IT--one that we believe is similar to the concept of
outcome measures in the quality performance category in the sense that
MIPS eligible clinicians could potentially be measured more directly on
how the use of health IT contributes to the overall health of their
patients. Under this concept, MIPS eligible clinicians would be able to
track certain use cases or outcomes in order to tie patient health
outcomes to the use of health IT.
We believe this approach would allow us to directly link health IT
adoption and use to patient outcomes, moving MIPS beyond the
measurement of EHR adoption and process measurement and into a more
patient-focused health IT program. From comments and feedback we have
received from the health care provider community, we understand that
this type of approach would be a welcome enhancement to the measurement
of health IT. At this time, we recognize that the technology and
measurement needed for this type of program are unavailable. We
solicited comment on what this type of measurement would look like
under MIPS, including the type of measures that would be needed within
the advancing care information performance category and the other
performance categories to measure this type of outcome, what
functionalities with CEHRT would be needed, and how such an approach
could be implemented.
The following is a summary of the comments we received:
Comment: Several commenters expressed an interest in advancing the
use of certified health IT in a clinical setting. Some commenters
suggested combining advancing care information performance category
measures and improvement activities in the improvement activities
performance category, though they cautioned that improvement activities
should not require the use of CEHRT; rather, CEHRT should be
optional for improvement activities and should allow MIPS eligible
clinicians to earn credit in the advancing care information performance
category. Some commenters recommended that CMS award credit in both the
advancing care information performance category and improvement
activities performance category for overlapping activities.
Response: We agree that tying applicable improvement activities
under the improvement activities performance category to the objectives
and measures under the advancing care information performance category
would reduce reporting burden for MIPS eligible clinicians. Our first
step toward that goal of reducing reporting burden, and toward a more
holistic approach to EHR measurement, is to award a bonus score in the
advancing care information performance category if a MIPS eligible
clinician attests to completing certain improvement activities using
CEHRT functionality. We believe tying these performance categories
encourages MIPS eligible clinicians to use their CEHRT products not
only for documenting patient care, but also for improving their
clinical practices by using their CEHRT in a meaningful manner that
supports clinical practice improvement. The objectives and measures of
the advancing care information performance category measure specific
functions of CEHRT which are the building blocks for advanced use of
health IT. In the improvement activities performance category, these
same functions may be tied to improvement activities which focus on a
specific improvement goal or outcome for continuous improvement in
patient care.
In Table 8, we identify a set of improvement activities from the
improvement activities performance category that can be tied to the
objectives, measures, and CEHRT functions of the advancing care
information performance category and would thus qualify for the bonus
in the advancing care information performance category. For further
explanation of these improvement activities, we refer readers to the
discussion in section II.E.5.f. of this final rule with comment period.
While we note that these activities can be greatly enhanced through the
use of CEHRT, we are not suggesting that these activities require the
use of CEHRT for the purposes of
[[Page 77203]]
reporting in the improvement activities performance category. Rather,
we are suggesting that the use of CEHRT in carrying out these
activities can further the outcomes of clinical practice improvement,
and thus, we are awarding a bonus score in the advancing care
information performance category if a MIPS eligible clinician can
attest to using the associated CEHRT functions when carrying out the
activity. A MIPS eligible clinician attesting to using CEHRT for
improvement activities would use the same certification criteria in
completing the improvement activity as they would for the measures
under advancing care information as listed in Table 8; for the 2017
performance period, this may include 2014 or 2015 Edition CEHRT. For
example, for the first improvement activity in Table 8, in which a MIPS
eligible clinician would provide 24/7 access for advice about urgent
and emergent care, a MIPS eligible clinician may accomplish this
through expanded practice hours, use of alternatives to increase access
to the care team such as e-visits and phone visits, and/or provision of
same-day or next-day access. The Secure Messaging measure under the
advancing care information performance category requires that a secure
message was sent using the electronic messaging function of CEHRT to
the patient (or the patient-authorized representative), or in response
to a secure message sent by the patient (or the patient-authorized
representative). If secure messaging functionality is used to provide
24/7 access for advice about urgent and emergent care (for example,
sending or responding to secure messages outside business hours), this
would meet the requirement of using CEHRT to complete the improvement
activity and would qualify for the advancing care information bonus
score.
[Table 8: Improvement activities qualifying for the advancing care
information bonus -- graphics TR04NO16.000 through TR04NO16.006
omitted; see 81 FR 77203 through 77209]
BILLING CODE 4120-01-C
After consideration of the comments, we will award a 10 percent
bonus in the advancing care information performance category if a MIPS
eligible clinician attests to completing at least one of the
improvement activities specified in Table 8 using CEHRT. We note that
10 percent is the maximum bonus a MIPS eligible clinician will receive
whether they attest to using CEHRT for one or more of the activities
listed in the table. This bonus is intended to support progression
toward holistic health IT use and measurement; attesting to even one
improvement activity demonstrates that the MIPS eligible clinician is
working toward this holistic approach to the use of their CEHRT. We
additionally note that the weight of the improvement activity has no
bearing on the bonus awarded in the advancing care information
performance category.
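As an illustrative sketch only (not part of the rule text), the bonus
logic described above can be expressed as a simple calculation; the
function name and the representation of the bonus as a percentage here
are hypothetical:

```python
def aci_cehrt_bonus(attested_activities_using_cehrt: int) -> int:
    """Illustrative sketch of the advancing care information bonus
    described above: a flat 10 percent bonus is awarded if a MIPS
    eligible clinician attests to completing at least one qualifying
    improvement activity (Table 8) using CEHRT. Attesting to more
    than one activity does not increase the bonus, and the weight of
    the improvement activity has no bearing on the bonus."""
    return 10 if attested_activities_using_cehrt >= 1 else 0

# Attesting to one activity or to several yields the same bonus.
print(aci_cehrt_bonus(0))  # 0
print(aci_cehrt_bonus(1))  # 10
print(aci_cehrt_bonus(3))  # 10
```

The flat cap reflects the policy stated above: the bonus rewards any
demonstrated progression toward holistic CEHRT use rather than the
number or weight of activities attested.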
We are seeking comment on this integration of the improvement
activities with the advancing care information performance category,
and other ways to further the advancement of health IT measurement.
[[Page 77210]]
(3) Clinical Quality Measurement
Section 1848(o)(2)(A)(iii) of the Act requires the reporting of
CQMs using CEHRT. Section 1848(q)(5)(B)(ii)(II) of the Act provides
that under the methodology for assessing the total performance of each
MIPS eligible clinician, the Secretary shall, for a performance period
for a year, for which a MIPS eligible clinician reports applicable
measures under the quality performance category through the use of
CEHRT, treat the MIPS eligible clinician as satisfying the CQM
reporting requirement under section 1848(o)(2)(A)(iii) of the Act for
such year. We note that in the context and overall structure of MIPS,
the quality performance category allows for a greater focus on patient-
centered measurement, and multiple pathways for MIPS eligible
clinicians to report their quality measure data. Therefore, we did not
propose separate requirements for CQM reporting within the advancing
care information performance category and instead would require
submission of quality data for measures specified for the quality
performance category, in which we encourage reporting of CQMs with data
captured in CEHRT. We refer readers to section II.E.5.a. of the proposed
rule (81 FR 28184-28196) for discussion of reporting of CQMs with data
captured in CEHRT under the quality performance category.
Below is a summary of the comments received regarding CQM reporting
for the advancing care information category:
Comment: Many commenters supported our proposal not to include the
submission of CQMs in this category. Several noted that this
elimination will reduce burden for MIPS eligible clinicians, streamline
reporting and reduce overlap. Others supported the elimination of
duplicative reporting that existed under PQRS and the EHR Incentive
Programs.
Response: We appreciate commenters' support and note that the
submission of CQMs is a requirement for the Medicare EHR Incentive
Program. For the advancing care information performance category, we
will require submission of quality data for measures specified for the
quality performance category, in which we encourage reporting of CQMs
with data captured in CEHRT. This approach helps to avoid unnecessary
overlap and duplicative reporting. Therefore, we have not included
separate requirements for clinical quality measurement in the advancing
care information performance category, and direct readers to the
quality performance category discussed in section II.E.5.b. of this
final rule with comment period for information on clinical quality
measurement.
(4) Performance Period Definition for Advancing Care Information
Performance Category
In the Medicare and Medicaid Programs; Electronic Health Record
Incentive Program--Stage 3 proposed rule, we proposed to eliminate the
90-day EHR reporting period beginning in 2017 for EPs who had not
previously demonstrated meaningful use, with a limited exception for
the Medicaid EHR Incentive Program (80 FR 16739-16740, 16774-16775). We
received many comments from respondents stating their preference for
maintaining the 90-day EHR reporting period to allow first time
participants to avoid payment adjustments. In addition, commenters
indicated that the 90-day time period reduced administrative burden and
allowed for needed time to adapt their EHRs to ensure they could
achieve program objectives. As a result, we did not finalize our
proposal and established a 90-day EHR reporting period for all EPs in
2015 and for new participants in 2016, as well as a 90-day EHR
reporting period for new participants in 2015, 2016, and 2017 with
regard to the payment adjustments (80 FR 62777-62779; 62904-62906). In
addition we have proposed a 90-day EHR reporting period in 2016 for the
EHR Incentive Programs in a recent proposed rule, the Calendar Year
(CY) 2017 Changes to the Hospital Outpatient Prospective Payment System
(OPPS) and Ambulatory Surgical Center (ASC) (81 FR 45753).
Moving forward, the implementation of MIPS creates a critical
opportunity to align performance periods to ensure that quality,
improvement activities, cost, and the advancing care information
performance categories are all measured and scored based on the same
period of time. We believe this would lower reporting burden, focus
clinician quality improvement efforts and align administrative actions
so that MIPS eligible clinicians can use common systems and reporting
pathways.
Under MIPS, we proposed to align the performance period for the
advancing care information performance category to the proposed MIPS
performance period of one full calendar year. The intent of the
proposal was to reduce reporting burden and streamline requirements so
that MIPS eligible clinicians and third party intermediaries, such as
registries and QCDRs, would have a common timeline for data submission
to all performance categories (81 FR 28179-28181). Therefore, we noted
there would not be a separate 90-day performance period for the
advancing care information performance category and MIPS eligible
clinicians would need to submit data based on a performance period
starting January 1, 2017, and ending December 31, 2017, for the first
year of MIPS. We also stated that MIPS eligible clinicians that only
have data for a portion of the year can still submit data, be assessed
and be scored for the advancing care information performance category
(81 FR 28179-28181). Under that proposal, MIPS eligible clinicians
would need to possess CEHRT and report on the objectives and measures
(without meeting any thresholds) during the calendar year performance
period to achieve the advancing care information performance category
base score. Finally, we stated that MIPS eligible clinicians would be
required to submit all of the data they have available for the
performance period, even if the time period they have data for is less
than one full calendar year.
The following is a summary of the comments we received regarding
our advancing care information performance period proposal.
Comment: The majority of commenters did not support our proposal
for a performance period of one full calendar year. Instead, they
overwhelmingly recommended a 90-day performance period in 2017.
Commenters noted the need for time and resources to understand and
adjust to the new MIPS program. Others suggested that 90 days would
give MIPS eligible clinicians flexibility to acquire and implement
health IT products. A commenter noted that a shorter performance period
would enable MIPS eligible clinicians to adopt innovative uses of
technology as it would permit them to test new health IT solutions.
Additionally, with the final rule with comment period not expected
until late in 2016, commenters noted there would not be sufficient time
to review and understand the rule and begin data collection on January
1, 2017.
Other commenters noted that MIPS eligible clinicians must perform
improvement activities for the improvement activities performance
category for at least a 90-day performance period, and suggested
adopting the same for the advancing care information performance
category as it would create alignment. Some commenters requested a
performance period of 90 days for the first several years of the
program. A few recommended a 90-day performance period every time a new
edition of
[[Page 77211]]
CEHRT is required. Others suggested partial year reporting or reporting
for a quarter. One recommended that solo practitioners report for 60
days. We note that only a few commenters supported our proposal.
Response: We understand the challenges of a full year performance
period. As discussed in the proposed rule (81 FR 28179 through 28181),
MIPS eligible clinicians that only have data for a portion of the year
can still submit data, be assessed, and be scored for the advancing care
information performance category; thus, they would not need to report
for one full year, but rather could report whatever data they had
available, even if that data represented less than a full-year period.
Additionally, we understand the commenters' concerns and rationale
for requesting a 90-day performance period. As discussed in section
II.E.4. of this final rule with comment period, for the first
performance period of CY 2017, we will accept a minimum of 90 days of
data within CY 2017, although we greatly encourage MIPS eligible
clinicians to submit data for the full year performance period. Also in
recognition of the switch from CEHRT certified to the 2014 Edition to
CEHRT certified to the 2015 Edition, for the 2018 performance period we
will also accept a minimum of 90 days of data within CY 2018. We refer
readers to section II.E.4. of this final rule with comment period for
further discussion about the MIPS performance period and the 90-day
minimum.
Comment: One commenter encouraged CMS to extend the transition
timeframe for the performance periods under MIPS in 2017 and 2018. The
commenter indicated that its vendors struggle to provide the budgetary
estimates needed to plan staff and financial resources due to the lack
of clarity on what would be required for the MIPS program.
Response: We recognize that vendors will require varying levels of
effort to transition their technology to the MIPS reporting
requirements. We note that our proposal to adopt substantively the same
definition of CEHRT for the 2015 Edition under MIPS that was adopted in
the 2015 EHR Incentive Programs final rule was intended to provide
consistency for MIPS eligible clinicians, as well as to allow EHR
vendors to begin development based on the specifications finalized in
October of 2015 and released by ONC for testing beginning in 2016
unimpeded by the timeline related to any rulemaking for the MIPS
program. This would allow vendors to work toward certification on a
longer timeline and allow MIPS eligible clinicians to adopt and
implement the technology in preparation for the performance period in
2018. The MIPS performance period in 2017 will serve as a transition
year for MIPS eligible clinicians, vendors, and other parties
supporting MIPS eligible clinicians. Further, in section II.E.5.a. of
this final rule with comment period, we have established multiple
reporting mechanisms to allow MIPS eligible clinicians to report their
advancing care information data in the event that their vendor is
unable to support new submission requirements. We are adopting for MIPS
the 2017 Advancing Care Information Transition objectives and measures
(referred to in the proposed rule as Modified Stage 2 objectives and
measures) and Advancing Care Information objectives and measures
(referred to in the proposed rule as adapted from the Stage 3
objectives and measures) and allowing MIPS eligible clinicians and
groups to use technology certified to either the 2014 Edition or the
2015 Edition or a combination of the two editions to support their
selection of objectives and measures for 2017. We intend this
consistency with prior programs to help ease the transition and reduce
the development work needed to transition to MIPS. Finally, we will
accept a minimum of any consecutive 90 days in the 2018 performance
period for the advancing care information performance category to
support eligible clinicians and groups as they transition to technology
certified to the 2015 Edition for use in 2018. For these reasons, we
believe a 1 year transition during the 2017 MIPS performance period is
sufficient.
After consideration of the public comments received, we are
finalizing our proposal to align the performance period for the
advancing care information performance category with the MIPS
performance period of one full calendar year. For the first performance
period of MIPS (CY 2017), we will accept a minimum of 90 consecutive
days of data in CY 2017; however, we encourage MIPS eligible clinicians
to report data for the full year performance period. For the second
performance period of MIPS (CY 2018), we will accept a minimum of 90
consecutive days of data in CY 2018; however, we encourage MIPS eligible
clinicians to report data for the full year performance period. We
refer readers to section II.E.4. of this final rule with comment period
for further discussion of the MIPS performance period.
(5) Advancing Care Information Performance Category Data Submission and
Collection
(a) Definition of Meaningful EHR User and Certification Requirements
In the 2015 EHR Incentive Programs final rule (80 FR 62873), we
outlined the requirements for EPs using CEHRT in 2017 for the Medicare
and Medicaid EHR Incentive Programs as it relates to the objectives and
measures they select to report. In the proposed rule, we proposed to
adopt a definition of CEHRT at Sec. 414.1305 for MIPS eligible
clinicians that is based on the definition that applies in the EHR
Incentive Programs under Sec. 495.4.
We proposed that for 2017, the first MIPS performance period, MIPS
eligible clinicians would be able to use EHR technology certified to
either the 2014 or 2015 Edition certification criteria as follows:
A MIPS eligible clinician who only has technology
certified to the 2015 Edition may choose to report: (1) On the
objectives and measures specified for the advancing care information
performance category in section II.E.5.g.(7) of the proposed rule (81
FR 28221 through 28223), which correlate to Stage 3 requirements; or
(2) on the alternate objectives and measures specified for the
advancing care information performance category in section II.E.5.g.(7)
of the proposed rule (81 FR 28223 and 28224), which correlate to
modified Stage 2 requirements.
A MIPS eligible clinician who has technology certified to
a combination of 2015 Edition and 2014 Edition may choose to report:
(1) On the objectives and measures specified for the advancing care
information performance category in section II.E.5.g.(7) of the
proposed rule (81 FR 28221 through 28223), which correlate to Stage 3;
or (2) on the alternate objectives and measures specified for the
advancing care information performance category as described in section
II.E.5.g.(7) of the proposed rule (81 FR 28223 and 28224), which
correlate to modified Stage 2, if they have the appropriate mix of
technologies to support each measure selected.
A MIPS eligible clinician who only has technology
certified to the 2014 Edition would not be able to report on any of the
measures specified for the advancing care information performance
category described in section II.E.5.g.(7) of the proposed rule (81 FR
28221 through 28223) that correlate to a Stage 3 measure that requires
the support of technology certified to the 2015 Edition. These MIPS
eligible clinicians would be
[[Page 77212]]
required to report on the alternate objectives and measures specified
for the advancing care information performance category as described in
section II.E.5.g.(7) of the proposed rule (81 FR 28223 and 28224),
which correlate to modified Stage 2 objectives and measures.
We proposed that beginning with the performance period in 2018, MIPS
eligible clinicians:
Must only use technology certified to the 2015 Edition to
meet the objectives and measures specified for the advancing care
information performance category in section II.E.5.g.(7) of the
proposed rule (81 FR 28222 and 28223), which correlate to Stage 3.
We welcomed comments on the proposals, which were intended to
maintain consistency across MIPS, the Medicare EHR Incentive Program
and the Medicaid EHR Incentive Program.
Finally, we proposed to define at Sec. 414.1305 a meaningful EHR
user under MIPS as a MIPS eligible clinician who possesses CEHRT, uses
the functionality of CEHRT, and reports on applicable objectives and
measures specified for the advancing care information performance
category for a performance period in the form and manner specified by
CMS.
The following is a summary of the comments we received regarding
our proposal for EHR certification requirements.
Comment: Most commenters supported the proposal to allow MIPS
eligible clinicians to use technology certified to either the 2014 or
the 2015 Edition for the performance period in 2017. Many commenters
urged CMS to allow MIPS eligible clinicians to continue to use EHR
technology certified to either the 2014 or 2015 Edition in the 2018
performance period and beyond, citing concerns over the time required
for health IT development and certification, MIPS eligible clinician
readiness, and the possibility that 2015 Edition technology may not be
available in time for the performance period or reporting timeframe. A
few commenters suggested that flexibility in the form of a hardship
exception to reporting to MIPS be offered to accommodate MIPS eligible
clinicians who are unable to implement EHR technology certified to the
2015 Edition in time for the 2018 performance period. Other commenters
found the requirement to use EHR technology certified to the 2015
Edition in 2018 unacceptable. Commenters noted that, as of the comment
due date, there were no products certified to the 2015 Edition and
recommended that we allow the use of products certified to the 2014
Edition through 2020. Some commenters were also concerned that the
small number of products certified to the 2015 Edition would require
MIPS eligible clinicians to find alternatives to meeting the advancing
care information requirements and possibly limit those in APMs from
utilizing the benefits of the new technology.
Response: We appreciate the comments and feedback we received, and
the support of the proposal for performance periods in 2017 to allow
the use of technology certified to the 2014 or 2015 Edition or a
combination of the two. We believe this will allow MIPS eligible
clinicians the flexibility to transition to EHR technology certified to
the 2015 Edition for use for performance periods in 2018 in a manner
that works best for their systems, workflows, and clinical needs. We
additionally understand the concerns raised by commenters regarding the
timeline to implement the 2015 Edition in time for use for performance
periods in 2018. We note the requirements for technology certified to
the 2015 Edition were established in October 2015 in ONC's final rule
titled 2015 Edition Health Information Technology (Health IT)
Certification Criteria, 2015 Edition Base Electronic Health Record
(EHR) Definition, and ONC Health IT Certification Program Modifications
(80 FR 62602-62759). The EHR Incentive Programs final rule adopted the
requirement that EPs, eligible hospitals, and CAHs use technology
certified to the 2015 Edition beginning in 2018. We intend to maintain
continuity for MIPS eligible clinicians and health IT vendors who may
already have CEHRT or who have begun planning for a transition to
technology certified to the 2015 Edition based on the definition of
CEHRT finalized for the EHR Incentive Programs in the 2015 EHR
Incentive Programs final rule (80 FR 62871 through 62889). Therefore,
there are no new certification requirements in the definition we are
finalizing for MIPS eligible clinicians participating in the advancing
care information performance category of MIPS at Sec. 414.1305 in
order to maintain consistency with the EHR Incentive Programs CEHRT
definition at 42 CFR 495.4. Our proposal to adopt a substantively
similar definition of CEHRT that was finalized in the 2015 EHR
Incentive Programs final rule was intended to provide consistency for
MIPS eligible clinicians and also to allow EHR vendors to begin
development based on the specifications finalized in October of 2015
and released by ONC for testing beginning in 2016 unimpeded by the
timeline related to any rulemaking for the MIPS program. This allows
vendors to work toward certification on a longer timeline and allows
MIPS eligible clinicians to adopt and implement the technology in
preparation for the performance period in 2018. In addition, in order
to allow eligible clinicians and groups adequate time to transition to
EHR technology certified to the 2015 Edition for use in CY 2018, we
will accept a minimum of 90 consecutive days of data within the CY 2018
performance period for the advancing care information performance
category. In partnership with ONC, we are monitoring the development
and certification process for health IT products certified to the 2015
Edition and will continue to gauge MIPS eligible clinician readiness
for the 2018 performance period. At this time, we believe it is
appropriate to require the use of EHR technology certified to the 2015
Edition for the performance period in 2018 and encourage MIPS eligible
clinicians to work with their EHR vendors in the coming months to
prepare for the transition to the 2015 Edition for the performance
period in CY 2018.
Comment: One commenter suggested that the CEHRT definition be
expanded to include requirements beyond those finalized for meeting the
advancing care information performance category and commenters noted
that vendors other than EHR vendors could support the criteria listed
in the proposed rule, to include Health Information Exchanges (HIE) or
Health Information Service Providers (HISPs).
Response: The definition of CEHRT does contain elements that are
not included in the advancing care information performance category. As
noted in the proposed rule (81 FR 28218-28219), and consistent with
prior EHR Incentive Program policy, removing a measure from the
reporting requirements does not remove the functions supporting that
measure from the definition of CEHRT unless we make corresponding
changes to that definition. Therefore, a MIPS eligible clinician must
implement that function in their practice in order to have their system
meet the technological specifications required for participation in the
program. For example, in the 2015 EHR Incentive Programs final rule (80
FR 62786), we noted that the Stage 1 ''Record Demographics'' measure
was designated as topped out and no longer required for reporting, but
CEHRT must still capture and record demographics as structured data
using the appropriate standards. For MIPS, we did not propose to
include the CPOE and CDS objectives and measures in the
[[Page 77213]]
advancing care information performance category although the technology
functions supporting these measures were included in our proposed
definition of CEHRT for MIPS.
Comment: Some commenters were encouraged by CMS' commitment to
collaborate with ONC on the 2015 Edition CEHRT requirements for MIPS to
align with the evolving standards to support health IT capabilities.
Response: We appreciate these comments and will continue to
collaborate with ONC on the alignment of MIPS requirements and CEHRT in
future rulemaking.
Comment: A few commenters requested that the definitions of CEHRT
incorporate the roles of non-physician practitioners, including Nurse
Practitioners (NPs), Physician Assistants (PAs), Certified Registered
Nurse Anesthetists (CRNAs) and Clinical Nurse Specialists (CNSs). They
noted that current EHR vendor software usually does not allow non-
physician practitioners to make entries or be identified. The
commenters suggested that CEHRT vendors should be required to include
provisions so that non-physician practitioners can also utilize the
CEHRT so that they can meet MIPS requirements.
Response: The requirements for the use of CEHRT do not specify the
type of provider or clinician that can enter data, nor do ONC's
certification criteria in any way limit the entry of data by non-
physician practitioners. In some states, the MIPS eligible clinicians
mentioned by the commenters may already be participating in the
Medicaid EHR Incentive Programs as EPs and using CEHRT to support their
clinical practice. In addition, many practices across a wide range of
settings where EPs have participated in the Medicare EHR Incentive
Programs have developed different workflows to meet their practice
needs including the various staff beyond the eligible clinician that
enter data. We encourage MIPS eligible clinicians and groups to work
with their vendor, and with their own practice and clinical workflows
to identify and establish best practices for data capture and data
mapping to support their unique practice needs.
Comment: Some commenters recommended that CMS consider ways to
measure possible clinical workflow disruptions caused by health IT
(EHRs). The commenters suggested that CMS use Medicare beneficiary
surveys, focus groups, patient-reported outcome measures, and the
CAHPS for MIPS survey, and incorporate those results when designing
health IT specifications and regulations to be used across settings.
Response: We appreciate the feedback and will take this suggestion
into consideration in the future. We encourage MIPS eligible clinicians
to work with their EHR vendor to improve the clinical workflow in a way
that best suits their individual practice needs.
Comment: Other commenters noted that while patient access to data
is important, MIPS eligible clinicians also need interoperable data
from a variety of sources to integrate seamlessly into their work flow.
The commenters believe that third party applications will play a major
role in satisfying this need to ensure data ``quality'' so that
physicians get the most relevant data in a useable format, when and
where they need it.
Response: CMS and ONC agree with the comments that interoperability
and the seamless integration of data and systems into clinical
workflows is essential to improving health care quality. For this
reason, the 2015 Edition certification criteria include testing and
certification for API functionality as a certified health IT module (80
FR 62601-62759), as well as criteria related to ensuring the ability to
receive and consume electronic summary of care records from external
sources into the provider's EHR and to developing a path for bi-
directional exchange of immunization data with public health
registries.
After consideration of the comments we received, we are finalizing
our proposal regarding EHR certification requirements at Sec. 414.1305
as proposed and encourage MIPS eligible clinicians to prepare for the
migration to the 2015 Edition of CEHRT in 2018. In 2017, MIPS eligible
clinicians may use EHR technology certified to the 2014 Edition or the
2015 Edition or a combination of the two. We note that a MIPS eligible
clinician who only has technology certified to the 2014 Edition would
not be able to report certain measures specified for the advancing care
information performance category that correlate to a Stage 3 measure
for which there was no Stage 2 equivalent. These MIPS eligible
clinicians may instead report the objectives and measures specified for
the advancing care information performance category which correlate to
Modified Stage 2 objectives and measures. In 2018, MIPS eligible
clinicians must use EHR technology certified to the 2015 Edition.
The following is a summary of the comments we received regarding
our proposal for defining a meaningful EHR user under MIPS.
Comment: Many commenters expressed an overall desire to maintain a
moderate to high level standard and category weight for the distinction
of meaningful EHR user. These commenters noted that the definition of
meaningful EHR user will have an important impact on health IT adoption
and that reducing the stringency or lowering the advancing care
information performance category weight in the MIPS final score could
hinder progress toward robust, person-centered use of health IT across
the health care industry.
Response: We agree that defining a meaningful EHR user is critical
for all of the reasons that the commenters raise; it is an important
piece of health IT adoption and promoting interoperability. We seek to
balance this critical aspect of EHR reporting with our desire to
increase widespread adoption of health IT and clinical standards among
MIPS eligible clinicians. We believe our final policies will encourage
more widespread adoption and use of health IT in a practice setting. We
are also dedicated to increasing the stringency of the measures
specified for the advancing care information performance category in
future years of the MIPS program to further the advancement of health
IT use.
After consideration of the public comments we received, we are
finalizing our proposal to define a meaningful EHR user for MIPS under
Sec. 414.1305 as a MIPS eligible clinician who possesses CEHRT, uses
the functionality of CEHRT, and reports on applicable objectives and
measures specified for the advancing care information performance
category for a performance period in the form and manner specified by
CMS.
(b) Method of Data Submission
Under the Medicare EHR Incentive Program, EPs attest to the
numerators and denominators for certain objectives and measures
through a CMS Web site. For the purpose of reporting advancing care
information performance category objectives and measures under the
MIPS, we proposed at Sec. 414.1325 to allow for MIPS eligible
clinicians to submit advancing care information performance category
data through the qualified registry, EHR, QCDR, attestation, and CMS
Web Interface submission methods. Regardless of the data submission
method, all
MIPS eligible clinicians must follow the reporting requirements for the
objectives and measures to meet the requirements of the advancing care
information performance category.
We note that under this proposal, 2017 would be the first year that
EHRs (through the QRDA submission
[[Page 77214]]
method), QCDRs and qualified registries would be able to submit EHR
Incentive Program objectives and measures (as adopted for the advancing
care information performance category) to us, and the first time this
data would be reported through the CMS Web Interface. We recognize that
some health IT vendors, QCDRs, and qualified registries may not be able
to conduct this type of data submission for the 2017 performance period
given the development efforts associated with this data submission
capability. However, we are including these data submission mechanisms
in 2017 to support early adopters and to signal our longer-term
commitment to working with organizations that are agile, effective and
can create less burdensome data submission mechanisms for MIPS eligible
clinicians. We believe the proposed data submission methods could
reduce reporting burden by synchronizing reporting requirements, data
submission, and systems, and could allow for greater access and ease in
submitting data throughout the MIPS program. We note that specific
details about the form and manner for data submission will be addressed
by CMS in the future.
The following is a summary of the comments we received regarding
our proposal to allow for multiple methods for data submission for the
advancing care information performance category.
Comment: The majority of commenters supported the proposed data
submission approach to allow for MIPS eligible clinicians to submit
data for the advancing care information performance category through
multiple submission methods, including, for example, attestation,
qualified registries, QCDRs, EHRs, and the CMS Web Interface.
Many agreed that the proposal alleviates the need for individual MIPS
eligible clinicians and groups to use a separate reporting mechanism to
report data for different performance categories.
Response: We appreciate the supportive comments and reiterate that
our goals include reducing the reporting burden, aligning reporting
requirements across MIPS performance categories, and supporting
efficient data submission mechanisms.
Comment: Some commenters expressed concern that many third party
data submission entities do not have the necessary data submission
functionality and will not have enough time to develop, distribute and
adopt the needed functionality for a performance period in 2017. One
commenter requested that CMS provide detailed guidance to vendors and
QCDRs as they implement data submission functionality. Another
commenter expressed concern about the potential for vendors and
developers of QCDRs and registries to fail to fulfill the technical
requirements for data submission and advised CMS to finalize a policy
indicating that MIPS eligible clinicians would not be penalized for
failure of data submission due to vendor issues. One commenter
suggested offering bonus points for the use of QCDRs or registry
adoption to recognize the investment needed to participate.
Response: We appreciate the concerns raised by commenters and note
that we intend to provide detailed guidance for EHR vendors, as well as
third party data intermediaries who submit data on behalf of MIPS
eligible clinicians to help them be successful in data submission.
However, we acknowledge that some EHRs, QCDRs and registry vendors may
not be able to support data submission for the advancing care
information performance category for 2017 due to the time needed to
develop the technology and functionality to collect and submit these
data. For this reason, as discussed in section II.E.5.a. of this final
rule with comment period, we offer MIPS eligible clinicians several
reporting mechanisms from which to choose. While we believe that in the
long term, it is more convenient for MIPS eligible clinicians to submit
data one time for all performance categories, we acknowledge that this
may not be possible in the transition year for the aforementioned
reasons. Therefore, we offer the option of attestation for those MIPS
eligible clinicians whose CEHRT, QCDR, or registry is not prepared to
support advancing care information performance category data submission
in 2017. For further discussion of MIPS submission methods, we refer
readers to section II.E.5.a. of this final rule with comment period.
Comment: One commenter requested that CMS provide greater
flexibility in the submission standards set forth for health IT
vendors, particularly in the transition year of MIPS, including the
ability to submit data via QCDR XML. The commenter stated that QCDR
vendors often experience issues submitting data using the uniform
standards in QRDA implementation guides and that many QRDA variables
that are clinical in nature do not easily map to the variables in
CEHRT.
Response: We note that our proposal does allow for submission of
the advancing care information performance category data via QCDR, as
well as registry, CEHRT, CMS Web Interface and attestation. We believe
this flexibility allows MIPS eligible clinicians the ability to submit
through their chosen submission mechanism that is most appropriate for
their practice.
Comment: One commenter believed the attestation process is
cumbersome and expensive for large groups and suggested that CMS
develop a process that will allow larger groups to attest as a group.
Response: Because EPs reporting under the EHR Incentive Program
reported using their individual NPIs, attestation and data submission
were completed at the NPI level, which was not conducive to groups
combining their data and attesting for all of their NPIs together. We
agree that this same approach under the MIPS would be cumbersome for
group submission. Under the MIPS, groups will have the ability to
attest or submit their advancing care information data through a
qualified registry, QCDR, EHR, attestation, or CMS Web Interface as a
group, meaning the data would be aggregated to the group level and
submitted once on behalf of all MIPS eligible clinicians within the
group. MIPS eligible clinicians will also have the ability to submit as
individuals, if their group is not submitting using the group method.
In these cases, the attestation or data submission would be done at the
individual (TIN/NPI) level.
Comment: One commenter recommended the mandatory publication of EHR
source code in order to reduce bias and errors.
Response: We appreciate the suggestion, however, we note that this
is outside our authority under section 1848(q) of the Act and outside
the scope of this rule.
We note that there were several other comments related to data
submission for MIPS, and we direct readers to section II.E.5.a. of this
final rule with comment period for discussion of those comments. After
consideration of the comments we received, we are finalizing our policy
as proposed.
(c) Group Reporting
Under the Medicare EHR Incentive Program, we adopted a reporting
mechanism for EPs that are part of a group, to attest using one common
form, or a batch reporting process. To determine whether those EPs
meaningfully used CEHRT, under that batch reporting process, we
assessed the individual performance of the EPs that made up the group,
not the group as a whole.
The structure of the MIPS and our desire to achieve alignment
across the MIPS performance categories appropriately necessitates the
ability to assess the performance of MIPS eligible
[[Page 77215]]
clinicians at the group level for all MIPS performance categories. We
believe MIPS eligible clinicians should be able to submit data as a
group, and be assessed at the group level, for all of the MIPS
performance categories, including the advancing care information
performance category. For this reason, we proposed a group reporting
mechanism for individual MIPS eligible clinicians to have their
performance assessed as a group for all performance categories in
section II.E.1.e. of the proposed rule (81 FR 28178 and 28179),
consistent with section 1848(q)(1)(D)(i)(I) & (II) of the Act.
Under this option, we proposed that performance on advancing care
information performance category objectives and measures would be
assessed and reported at the group level, as opposed to the individual
MIPS eligible clinician level. We note that the data submission
criteria would be the same when submitted at the group level as if
submitted at the individual level, but the data submitted would be
aggregated for all MIPS eligible clinicians within the group practice.
We believe this approach to data submission better reflects the team
dynamics of the group, and would reduce the overall reporting burden
for MIPS eligible clinicians that practice in groups, incentivize
practice-wide approaches to data submission, and provide enterprise-
level continuous improvement strategies for submitting data to the
advancing care information performance category. Please see section
II.E.1.e. of the proposed rule (81 FR 28178 and 28179) for more
discussion of how to participate as a group under MIPS.
The following is a summary of the comments we received regarding
our proposal to allow for group reporting starting in 2017.
Comment: The majority of commenters strongly support the allowance
of group reporting in the advancing care information performance
category. Reasons for support include the reduction in reporting
burden, as well as alignment with other MIPS performance categories.
Response: We appreciate the supportive comments.
Comment: Many commenters expressed concern about allowing group
reporting for the advancing care information performance category in
2017 given the short timeframe between the publication of this final
rule with comment period and the start of the 2017 performance period.
Commenters believe that this would offer too little time to implement
group reporting capabilities in CEHRT, stating that report logic will
require clear specifications and time for development and distribution
of report updates.
Response: We recognize that the implementation of group reporting
may require varying levels of effort for different practices and
therefore may not be the best choice for all MIPS eligible clinicians
for the 2017 performance period. However, we believe that making group
reporting available for performance periods in CY 2017 offers a
significant reduction in reporting burden for many group practices that
have a large number of MIPS eligible clinicians, all of whom would
otherwise have to report the MIPS requirements individually. We
additionally note that groups and MIPS eligible clinicians have the
ability to report through multiple reporting mechanisms providing
flexibility should their CEHRT be unable to support group reporting in
2017.
Comment: Some commenters requested clarification on how group
reporting of the base and performance scores will be calculated if one
or more individual MIPS eligible clinicians within a group practice
does not report on an objective or can claim an exclusion from
reporting on an objective. In addition, a few commenters asked how to
avoid counting more than once the unique patients seen by multiple MIPS
eligible clinicians within the group practice. They also asked for
detailed instructions for calculating the numerators and denominators
of the measures reported.
Response: We understand that additional explanation is needed in
order for groups to determine whether the group reporting option is
best for their practice.
As with group reporting for the other MIPS performance categories,
to report as a group, the group will need to aggregate data for all the
individual MIPS eligible clinicians within the group for whom they have
data in CEHRT. For those who choose to report as a group, performance
on the advancing care information performance category objectives and
measures would be reported and evaluated at the group level, as opposed
to the individual MIPS eligible clinician level. For example, the group
calculation of the numerators and denominators for each measure must
reflect all of the data from all individual MIPS eligible clinicians
that have been captured in CEHRT for the given advancing care
information measure. If the group practice has CEHRT that is capable of
supporting group reporting, they would submit the aggregated data
produced by the CEHRT. If the group practice does not have CEHRT that
is capable of or updated to support group reporting, the group would
aggregate the data by adding together the numerators and denominators
for each MIPS eligible clinician within the group for whom the group
has data captured in their CEHRT. If an individual MIPS eligible
clinician meets the criteria to exclude a measure, their data can be
excluded from the calculation of that particular measure only.
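For groups whose CEHRT cannot produce an aggregated report, the manual aggregation described above amounts to simple addition with per-clinician exclusions. The following sketch is purely illustrative and not part of the rule; the data layout and field names are hypothetical:

```python
# Illustrative sketch of group-level aggregation for one advancing care
# information measure: sum numerators and denominators across all MIPS
# eligible clinicians in the group, omitting any clinician who qualifies
# for an exclusion on that particular measure only.
# (Data layout and field names are hypothetical, not defined by the rule.)

def aggregate_measure(clinician_data, measure):
    """Return the group-level (numerator, denominator) for one measure."""
    group_num = 0
    group_den = 0
    for clinician in clinician_data:
        record = clinician["measures"].get(measure)
        if record is None or record.get("excluded"):
            # Excluded clinicians are omitted from this measure only.
            continue
        group_num += record["numerator"]
        group_den += record["denominator"]
    return group_num, group_den

# Example: three clinicians, one of whom claims an exclusion.
data = [
    {"measures": {"e_prescribing": {"numerator": 40, "denominator": 50}}},
    {"measures": {"e_prescribing": {"excluded": True}}},
    {"measures": {"e_prescribing": {"numerator": 10, "denominator": 25}}},
]
print(aggregate_measure(data, "e_prescribing"))  # (50, 75)
```

Because the final policy does not require deduplication of unique patients across clinicians or CEHRT systems, a patient seen by two clinicians may legitimately appear in both of the denominators being summed.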
We understand and agree that it can be difficult to identify unique
patients across a group for the purposes of aggregating performance on
the advancing care information measures, particularly when that group
is using multiple CEHRT systems. We further recognize that for 2017,
groups may be using systems which are certified to different CEHRT
editions further adding to this challenge. We consider ``unique
patients'' to be individual patients treated by the group who would
typically be counted as one patient in the denominator of an advancing
care information measure. This patient may see multiple MIPS eligible
clinicians within the group, or may see MIPS eligible clinicians at
multiple group locations. When aggregating performance on advancing
care information measures for group reporting, we do not require that
the group determine that a patient seen by one MIPS eligible clinician
(or at one location in the case of groups working with multiple CEHRT
systems) is not also seen by another MIPS eligible clinician in the
group or captured in a different CEHRT system. While this could result
in the same patient appearing more than once in the denominator, we
believe that the burden to the group of identifying these patients is
greater than any gain in measurement accuracy. Accordingly, this final
policy will allow groups some flexibility as to the method for counting
unique patients in the denominators to accommodate these scenarios
where aggregation may be hindered by systems capabilities across
multiple CEHRT platforms. We note that this is consistent with our data
aggregation policy for providers practicing in multiple locations under
the EHR Incentive Program (77 FR 53982).
Comment: A few commenters voiced concerns that group reporting and
many EHR systems, particularly hospital EHRs, mask who actually
performs the service and may not recognize the ability of MIPS eligible
clinicians who are not physicians to provide and document care. For
example, non-physicians who are not considered MIPS eligible
clinicians, such as nurse-midwives, physical or occupational
[[Page 77216]]
therapists and psychologists often perform services and complete their
actions using CEHRT. However, these commenters noted that CEHRT
functionality usually does not offer the ability to distinguish which
clinician actually performed the action, thus making it difficult to
calculate an accurate numerator and denominator for measures in the
advancing care information performance category. One commenter
requested that CMS require that CEHRT be able to identify which
clinician is using the CEHRT, ensuring that clinicians other than
physicians are able to make entries and actions are attributed to MIPS
eligible clinicians.
Response: We appreciate the feedback and agree that there are
issues related to group reporting that we will continue to monitor as
the program develops. We note that the vast majority of commenters
supported the group reporting option as it represents a reduction in
reporting burden for MIPS eligible clinicians who choose to report as
groups rather than as individuals. As we move forward with the
advancing care information performance category we will be working with
ONC to refine capabilities in CEHRT that could further support group
reporting.
Comment: One commenter urged CMS to avoid issuing guidance that
assigns nurses the role of scribe or data entry for physicians because
this would adversely affect the quality of care delivered to patients.
Response: We do not intend to issue guidance that defines or
redefines the role of non-physician practitioners, such as nurse
practitioners or nurse specialists.
After consideration of the comments, we are finalizing our proposal
to allow group reporting for the advancing care information performance
category with the additional explanation of data aggregation
requirements for group reporting provided in our response above,
particularly as it relates to aggregating unique patients seen by the
group.
For our final policy, we considered and rejected imposing a
threshold for group reporting. For example, in future years we may
require that groups submit their advancing care information
performance category data as a group only if 50 percent or more of
their eligible patient encounters are captured in CEHRT. While we considered
this as an option for 2017, the transition year of MIPS, we chose not
to institute such a policy at this time and will instead consider it
for future years. We are seeking comment in this final rule with
comment period on what would be an appropriate threshold for group
reporting in future years.
We note that group reporting policies for the MIPS program,
including the other performance categories, are discussed in section
II.E.5.a. of this final rule with comment period, and we refer readers
to that section for additional discussion of group reporting.
(6) Reporting Requirements & Scoring Methodology
(a) Scoring Method
Section 1848(q)(5)(E)(i)(IV) of the Act, as added by section 101(c)
of the MACRA, states that 25 percent of the MIPS final score shall be
based on performance for the advancing care information performance
category. Therefore, we proposed at Sec. 414.1375 that performance in
the advancing care information performance category will comprise 25
percent of a MIPS eligible clinician's MIPS final score for payment
year 2019 and each year thereafter. We received many comments in the
MIPS and APMs RFI from stakeholders regarding the importance of
flexible scoring for the advancing care information performance
category and provisions for multiple performance pathways. We agree
that this is the best approach moving forward with the adoption and use
of CEHRT as it becomes part of a single coordinated program under the
MIPS. For the reasons described here and previously in this preamble,
we proposed a methodology that balances the goals of
incentivizing participation and reporting while recognizing exceptional
performance by awarding points through a performance score. In this
methodology, we proposed at Sec. 414.1380(b)(4) that the score for the
advancing care information performance category would be comprised of a
score for participation and reporting, hereinafter referred to as the
``base score,'' and a score for performance at varying levels above the
base score requirements, hereinafter referred to as the ``performance
score''.
The following is a summary of the comments we received regarding
overall scoring for the advancing care information performance
category.
Comment: Overall, most commenters found the scoring to be
cumbersome and overly complex and recommended that it be
simplified. Suggestions included removing the distinction between the base
score and performance score. Others suggested removing objectives and
measures or moving them to other MIPS performance categories, such as
moving Public Health and Clinical Data Registry Reporting to the
improvement activities performance category. One commenter suggested
simplifying the assignment of points for each measure. For example,
they suggested that 10 percent per measure be awarded for the
following: 1. Patient Access; 2. Electronic Prescribing; 3.
Computerized Provider Order Entry (CPOE); 4. Patient-Specific
Education; 5. View, Download, Transmit; 6. Secure Messaging; 7.
Patient-Generated Health Data; 8. Patient Care Record Exchange; 9.
Request/Accept Patient Care Record; 10. Clinical Information
Reconciliation.
Response: We appreciate the constructive feedback from commenters.
Our priority is to finalize reporting requirements for the advancing
care information performance category that incentivize performance and
reporting with minimal complexity and reporting burden. We have
addressed many of these comments and concerns in our final scoring
methodology outlined in section II.E.5.g.(6)(a) of this final rule with
comment period.
Comment: Some commenters appreciated the split between base and
performance scores in the advancing care information performance
category, citing the flexibility offered compared to the EHR Incentive
programs. Many commenters also praised the elimination of the
requirement to meet measure thresholds.
Response: We appreciate commenters' support for our proposal. Our
priority is to finalize a scoring methodology for the advancing care
information performance category that promotes the use of CEHRT
reporting requirements in an efficient, effective and flexible manner.
Comment: Some commenters did not support the elimination of measure
thresholds. They believed that incorporating measure thresholds enables
MIPS eligible clinicians to earn a higher score in the advancing care
information performance category and would encourage a higher level of
success using CEHRT. Another commenter suggested replacing the base
score requirement of at least one in the numerator with a requirement
to meet a 5 percent threshold for each measure reported beginning for
the performance period of CY 2019.
Response: We believe the scoring approach, as proposed and then as
finalized in this final rule with comment period, promotes performance
on the advancing care information performance category measures by
rewarding high performance rather than requiring MIPS eligible
clinicians to meet one threshold across the board. We agree that in
future years of the program, we may consider higher minimum thresholds
for reporting; however, we also seek to allow flexibility for MIPS
[[Page 77217]]
eligible clinicians to report on the measures that are most meaningful
to their practice.
Comment: Most commenters supported the proposal to move away from
the overall all-or-nothing scoring approach previously used in the EHR
Incentive Programs. However, many commenters did not support the all-or-
nothing approach proposed to earn the base score and subsequent points
in the performance score, for the advancing care information
performance category. More than one commenter recommended offering
partial credit for each objective in the base score rather than an all-
or-nothing approach. Other comments included removing the base score and
only awarding points toward a performance score, as well as adding more
measure exclusions. Some suggested awarding points toward the
performance score even if the MIPS eligible clinician fails to meet a
base score.
Response: In order to provide more flexibility for MIPS eligible
clinicians, we have moved away from the all-or-nothing approach in our
final policy. We note that certain measures under our final policy
remain required measures in the base score. For example, section
1848(o)(2)(A) of the Act includes certain requirements that we have
chosen to implement through measures such as e-Prescribing, Send
Summary of Care (formerly Patient Care Record Exchange) and Request/
Accept Patient Care Record, and thus, certain measures under our final
policy remain required measures for the base score in the advancing
care information performance category. In addition to those measures
listed above, there are other measures such as Security Risk Analysis
that are essential to protecting patient privacy, which we believe
should be mandatory for reporting. We have addressed these comments
further with our final scoring methodology outlined in section
II.E.5.g.(6)(a) of this final rule with comment period. We have reduced
the total number of required measures from 11 in the base score as
proposed to only five in the final policy, which addresses some of the
concerns raised by commenters while meeting our statutory requirements,
as well as our commitment to patient privacy and access.
Comment: Many commenters requested that the distribution of points
for the base score and performance score of the advancing care
information performance category be reweighted. More than one commenter
suggested reducing the weight of the base score and increasing the
weight of the performance score over time. For example, some commenters
requested that the base be worth 40 percent and the performance be 60
percent of the points. Another commenter believed the base score should
initially be more heavily weighted, with the base score at 60 points,
Protect Patient Health Information score at 10 points, and performance
score at 80 points.
Response: Based on the overwhelming number of comments received, and our goal
to simplify the scoring methodology wherever possible, we agree with
commenters that the base and performance scores should be reconsidered
for the final policy. We have outlined the final scoring methodology in
section II.E.5.g.(6)(a) of this final rule with comment period, in
which the performance score is reweighted and the total possible score
for the advancing care information performance category is increased to
155 percent, which would be capped at 100 percent when applied to the 25
possible points for the advancing care information performance category
in the MIPS final score.
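The capping arithmetic described above can be illustrated with a short sketch. This is illustrative only; the example percentages are hypothetical inputs, while the 100 percent cap and the 25-point category weight are taken from the rule:

```python
# Illustrative sketch of converting advancing care information percentages
# into MIPS final-score points. The total possible percentage can exceed
# 100 (up to 155 under the final policy), but is capped at 100 before
# being scaled to the 25 points the category carries in the final score.
# (The input percentages below are hypothetical examples.)

def aci_category_points(base_pct, performance_pct, bonus_pct=0.0):
    """Cap the combined percentage at 100, then scale to 25 points."""
    total_pct = min(base_pct + performance_pct + bonus_pct, 100.0)
    return total_pct * 25 / 100

# A clinician earning the full 50-percent base score plus 90 percent
# from performance and bonus exceeds 100 percent and is capped.
print(aci_category_points(50.0, 90.0))  # 25.0
print(aci_category_points(50.0, 30.0))  # 20.0
```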
Comment: Many commenters disliked that no credit is awarded if the
numerator for any measure is not at least one or the response is not
``yes'' for yes/no measures. Some commenters proposed changing the
policy to allow MIPS eligible clinicians to earn a performance score
and bonus score even if they fail the base score. Others suggested
reducing the number of objectives to report to earn the base score. For
example, one commenter suggested requiring only the measures within the
following objectives to achieve the base score: Protect Patient Health
Information, Patient Electronic Access and Health Information Exchange.
Response: We appreciate the suggestions raised by commenters and
have taken these comments into account for our final policy discussed
in section II.E.5.g.(6)(a). We note that for required measures in the
base score, we would still require a one in the numerator or a ``yes''
response to yes/no measures. Section 1848(o)(2)(A) of the Act includes
certain requirements that we have chosen to implement through three of
the measures in the base score (e-Prescribing, Send a Summary of Care
(formerly Patient Care Record Exchange) and Request/Accept Summary of
Care (formerly Patient Care Record)), and thus, we believe these
measures should be required in order for a MIPS eligible clinician to
earn any score in the advancing care information performance category.
The other two required measures, Security Risk Analysis and Provide
Patient Access (formerly Patient Access) are of paramount importance to
CMS, and thus, we have maintained them as required measures in the base
score.
Comment: Many commenters support the emphasis on health information
exchange and patient engagement in both the base score and performance
score. Some commenters recommended that even more weight be given to these
areas in the performance score.
Response: We appreciate this feedback. We agree that health
information exchange and coordination of care through patient
engagement are essential to improving the quality of care.
(b) Base Score
To earn points toward the base score, a MIPS eligible clinician
must report the numerator and denominator of certain measures specified
for the advancing care information performance category (see measure
specifications in section II.E.5.g.(7) (81 FR 28226 through 28228)),
which are based on the measures adopted by the EHR Incentive Programs
for Stage 3 in the 2015 EHR Incentive Programs final rule, to account
for 50 percent (out of a total 100 percent) of the advancing care
information performance category score. For measures that include a
percentage-based threshold for Stage 3 of the EHR Incentive Program, we
would not require those thresholds to be met for purposes of the
advancing care information performance category under MIPS, but would
instead require MIPS eligible clinicians to report the numerator (of at
least one) and denominator (or a yes/no statement for applicable
measures, which would be submitted together with data for the other
measures) for each measure being reported. We note that for any measure
requiring a yes/no statement, only a yes statement would qualify for
credit under the base score. Under the proposal, the base score of the
advancing care information performance category would incorporate the
objective and measures adopted by the EHR Incentive Programs with an
emphasis on privacy and security. We proposed two variations of a
scoring methodology for the base score, a primary and an alternate
proposal, which are outlined below. Both proposals would require the
MIPS eligible clinician to meet the requirement to protect patient
health information created or maintained by CEHRT to earn any score
within the advancing care information performance category; failure to
do so would result in a base score of zero, a performance score of zero
(discussed in section II.E.5.g. of the proposed rule (81 FR 28221)), and
an advancing care
[[Page 77218]]
information performance category score of zero.
The primary proposal at section II.E.5.g.(6)(b)(ii) of the proposed
rule (81 FR 28221) would require a MIPS eligible clinician to report
the numerator (of at least one) and denominator or yes/no statement
(only a yes statement would qualify for credit under the base score)
for a subset of measures adopted by the EHR Incentive Program for EPs
in the 2015 EHR Incentive Programs final rule. In an effort to
streamline and simplify the reporting requirements under the MIPS, and
reduce reporting burden on MIPS eligible clinicians, we proposed that
two objectives (Clinical Decision Support and Computerized Provider
Order Entry) and their associated measures would not be required for
reporting under the advancing care information performance category.
the consistently high performance on these two objectives in the EHR
Incentive Program, with EPs achieving a median score of over 90
percent for the last 3 years, we stated our belief that these
objectives and measures are no longer effective measures of EHR
performance and use. In addition, we do not believe these objectives
and associated measures contribute to the goals of patient engagement
and interoperability, and thus, we believe these objectives can be
removed in an effort to reduce reporting burden without negatively
impacting the goals of the advancing care information performance
category. We note that the removed objectives and associated measures
would still be required as part of ONC's functionality standards for
CEHRT; however, MIPS eligible clinicians would not be required to
report the numerator and denominator or yes/no statement for those
measures. In the 2015 EHR Incentive Programs final rule we also
established that, for measures that were removed, the technology
requirements would still be a part of the definition of CEHRT. For
example, in that final rule, the Stage 1 Objective to Record
Demographics was removed, but the technology and standard for this
function in the EHR were still required (80 FR 62784). This means that
the MIPS eligible clinician would still be required to have these
functions as a part of their CEHRT.
The alternate proposal at section II.E.5.g.(6)(b)(iii) of the
proposed rule (81 FR 28222) would require a MIPS eligible clinician to
report the numerator (of at least one) and denominator or yes/no
statement (only a yes statement would qualify for credit under the base
score) for all objectives and measures adopted for Stage 3 in the 2015
EHR Incentive Programs final rule to earn the base score portion of the
advancing care information performance category, which would include
reporting a yes/no statement for CDS and a numerator and denominator
for CPOE objectives. We included these objectives in the alternate
proposal as MIPS eligible clinicians may believe the continued
measurement of these objectives is valuable to the continued use of
CEHRT as this would maintain the previously established objectives
under the EHR Incentive Program.
We stated our belief that both proposed approaches to the base
score are consistent with the statutory requirements under HITECH and
previously established CEHRT requirements as we transition to MIPS. We
also believe both approaches, in conjunction with the advancing care
information performance score, recognize the need for greater
flexibility in scoring CEHRT use across different clinician types and
practice settings by allowing MIPS eligible clinicians to focus on the
objectives and measures most applicable to their practice.
Comment: Several commenters were disappointed that our proposals
for the base score are so similar to the current meaningful use
requirements. They requested a more streamlined approach as they
believe the statute intended. Another commenter believed that advancing
care information performance category should reflect a MIPS eligible
clinician's use of digital clinical data to inform patient care and
encourage bi-directional data interoperability.
Response: While we did draw on the meaningful use foundation in
drafting the requirements for the advancing care information
performance category, our proposals have lessened those requirements
and provided additional flexibility as compared with all stages of the
EHR Incentive Programs. We note that we have made significant revisions
to the scoring methodology and reporting requirements in our final
policy discussed in section II.E.5.g.(6)(a) in response to these
comments. We would also welcome concrete proposals for new measures as
we move forward with EHR reporting requirements under the MIPS. We are
eager to improve interoperability and would welcome suggestions for
improvement.
Comment: We received many comments on the allocation of points in
the base score. Some commenters asked CMS to simplify the base score
calculation and weight the base score higher. Alternatively, commenters
recommended that CMS reweight the base score to 75 percent of the total
advancing care information performance category. Other commenters
recommended that any increase in the weight of the base score occur only if
CMS also moves away from the pass-fail approach to scoring this
section. Others suggested removing the base component of the scoring
methodology, and instead simply assigning a set amount of points that
can be earned for each measure.
In regard to the base score calculation, most commenters requested
that we remove the all-or-nothing scoring of the base score. Some asked
that CMS give clinicians the option to report on a subset of measures
to satisfy the base score. Many requested partial credit. Some
commenters expressed concern that not reporting at least a numerator of
one for the base measures will result in a score of zero for the entire
category. A commenter proposed that reporting a zero numerator or
denominator on a measure would constitute successful data submission,
and thus, the clinician should achieve full points for the base score.
Another recommended that CMS grant credit for each reported measure under
the base score and make clear that a physician will not fail the entire
advancing care information performance category if they fail to report
all base score measures.
Commenters also suggested giving full credit in the advancing care
information performance category if a MIPS eligible clinician attests
to using technology certified to the 2014 or 2015 Edition for MIPS year
1, and 75 percent credit toward advancing care information performance
category for subsequent years. Another asked that 50 percent in the
base score be awarded to clinicians that implemented CEHRT for at least
90 days of the performance period to ease newer users into EHR use. While
most requested less stringent requirements, some thought that it is too
easy to achieve the 50 percent base score. Others believed the ``one
patient threshold'' for advancing care information performance category
reporting for all measures in the base score is far too low.
Response: We have taken commenters' feedback into consideration as
we have constructed our final policy as outlined in section
II.E.5.g.(6)(a) of this final rule with comment period. While we
appreciate commenters' concerns about low thresholds, we believe that
the reporting requirements we set (a one in the numerator for
numerator/denominator measures, and a ``yes'' for yes/no measures) are
appropriate as we transition to the MIPS. We note the definition of
MIPS eligible clinician includes many practitioners that were not
eligible under the EHR Incentive Programs and thus have little to no
[[Page 77219]]
experience with the objectives and measures. While the reporting
requirements are lower than the thresholds established for Modified
Stage 2 and Stage 3 of the EHR Incentive Programs, we believe they are
appropriate for the first performance period of MIPS. Further, we have
tried to limit the composition of the base score so that MIPS eligible
clinicians can distinguish themselves through reporting on the
performance score measures. We are finalizing additional flexibilities
to address the concern about an all-or-nothing approach and reduced the
number of required measures from 11 in the proposed base score to five
in our final policy. We note that certain measures which implement
statutory requirements or that we consider high priority to protect
patient privacy and access are required for reporting. MIPS eligible
clinicians are required to report on all five of the required measures
in the base score in order to earn any points in the advancing care
information performance category. Considering this significant
reduction in the number of required measures for the base score, we do
not believe it is appropriate to increase the weight of the base score
as some commenters suggested and will keep it at 50 percent in our
final scoring methodology.
We are finalizing our policy that a MIPS eligible clinician must
report either a one in the numerator for numerator/denominator
measures, or a ``yes'' response for yes/no measures in order to earn
points in the base score, and a MIPS eligible clinician must report all
required measures in the base score in order to earn a score in the
advancing care information performance category. We note that the
remainder of a MIPS eligible clinician's score will be based on
performance and/or meeting the requirements to earn a bonus score for
Public Health and Clinical Data Registry Reporting or improvement
activities as described in section II.E.5.g.(7)(b) and II.E.5.g.(2)(b)
of this final rule with comment period.
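As a purely illustrative sketch (not part of the rule), the finalized base-score gate can be expressed as a simple check: each of the five required measures must be reported with either a numerator of at least one, or a ``yes'' response for yes/no measures, before any points can be earned in the category. The data layout below is hypothetical:

```python
# Illustrative check of the finalized base-score gate: a MIPS eligible
# clinician earns points in the advancing care information performance
# category only if every required measure is reported with a numerator
# of at least one, or a "yes" for yes/no measures.
# (The identifiers below paraphrase the rule's five required measures;
# the submission format is a hypothetical example, not a CMS schema.)
REQUIRED = [
    "security_risk_analysis",         # yes/no measure
    "e_prescribing",
    "provide_patient_access",
    "send_summary_of_care",
    "request_accept_summary_of_care",
]

def earns_base_score(submission):
    for measure in REQUIRED:
        value = submission.get(measure)
        if value is True:                        # "yes" on a yes/no measure
            continue
        if isinstance(value, tuple) and value[0] >= 1:
            continue                             # numerator of at least one
        return False                             # missing or zero: no score
    return True

ok = {
    "security_risk_analysis": True,
    "e_prescribing": (12, 30),                   # (numerator, denominator)
    "provide_patient_access": (5, 30),
    "send_summary_of_care": (1, 10),
    "request_accept_summary_of_care": (2, 10),
}
print(earns_base_score(ok))                              # True
print(earns_base_score({**ok, "e_prescribing": (0, 30)}))  # False
```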
(i) Privacy and Security; Protect Patient Health Information
In the 2015 EHR Incentive Programs final rule (80 FR 62832), we
finalized the Protect Patient Health Information objective and its
associated measure for Stage 3, which requires EPs to protect
electronic protected health information (ePHI, as defined in 45 CFR
160.103) created or maintained by the CEHRT through the implementation
of appropriate technical, administrative, and physical safeguards. As
privacy and security is of paramount importance and applicable across
all objectives, the Protect Patient Health Information objective and
measure would be an overarching requirement for the base score under
both the primary proposal and alternate proposal, and therefore would
be an overarching requirement for the advancing care information
performance category. We proposed that a MIPS eligible clinician must
meet this objective and measure to earn any score within the advancing
care information performance category. Failure to do so would result in
a base score of zero under either the primary proposal or the alternate
proposal, as well as a performance score of zero (discussed in
section II.E.5.g. of the proposed rule (81 FR 28215)) and an advancing
care information performance category score of zero.
The following is a summary of the comments we received regarding
our proposal to require that a MIPS eligible clinician must meet the
Protect Patient Health Information objective and measure to earn any
score within the advancing care information performance category.
Comment: Many commenters supported the proposal requiring the
Protect Patient Health Information objective and measure in order to
receive the full base score and any performance score in the advancing
care information performance category.
Response: We agree, and we continue to believe that there are many
benefits to safeguarding ePHI. Unintended or unlawful disclosures of
ePHI put EHRs, interoperability, and health information exchange at
risk. It is paramount that ePHI is properly protected and secured, and
we believe that requiring this objective and measure remains
fundamental to this goal.
Comment: A few commenters expressed uncertainty about the
effectiveness of the Protect Patient Health Information objective and
measure in ensuring the security and privacy of patient health
information, as well as maintaining doctor-patient confidentiality.
Response: We understand that in some cases this measure may not be
enough to protect data as data breaches become more sophisticated.
However, we continue to believe that widespread performance of security
risk analyses on a regular basis remains an important component of
protecting ePHI. The measure is a foundation of protection and we
expect that individuals and entities subject to HIPAA will also be
meeting the requirements of HIPAA.
Comment: Some commenters believed that reporting the Protect
Patient Health Information objective and measure is redundant and
burdensome, as the security risk analysis and other privacy and
security areas are already included under HIPAA requirements.
Response: Yes, we agree that a security risk analysis is included
in the HIPAA rules. However, it is our experience that some EPs are not
fulfilling this requirement under the EHR Incentive Programs. To
reinforce its importance, we are including it as a requirement for MIPS
eligible clinicians.
Comment: Some commenters expressed concern that meeting the Protect
Patient Health Information objective and measure requirements presents
a burden to small group practices, practices in rural settings, new
adopters of CEHRT and some MIPS eligible clinicians who experience
varying hardships.
Response: We disagree. The HIPAA Privacy and Security Rules, which
are more comprehensive than the Advancing Care Information measure and
with which certain entities must also comply, have been effective for
over 10 years. In addition, the Department of Health and Human Services
has produced a security risk assessment tool designed for use by small
and medium-sized providers and clinicians, available at https://www.healthit.gov/providers-professionals/security-risk-assessment and at
http://www.hhs.gov/hipaa/for-professionals/security/index.html.
This tool should help providers and clinicians with compliance, and
additional resources are available at http://www.hhs.gov/hipaa/for-professionals/security/guidance/index.html. We understand that
there are many sources of education available in the commercial market
regarding HIPAA compliance.
Comment: Many commenters stated that EHR use could jeopardize
patient confidentiality because personal information can be stolen.
Some stated that EHRs are a violation of privacy. Others do not want
their medical information accessible to the government or third party
vendors. Several stated that the proposed rule is contrary to the HIPAA
regulations.
Response: We agree that it is important to address the unique risks
and challenges that EHRs may present. We maintain that a focus on the
protection of ePHI is necessary for all clinicians. We also note that a
security risk analysis is required under the HIPAA regulations (45 CFR
164.308(a)(1)).
Comment: A few commenters offered suggestions to modify the Protect
Patient Health Information objective and measure, such as aligning the
architecture of CEHRT with the Hippocratic Oath or
[[Page 77220]]
working with the Office for Civil Rights (OCR) or the Office of the
Inspector General (OIG) to develop additional guidance to physicians
regarding privacy practices.
Response: We appreciate this feedback. We will continue to work
with the OCR and ONC to develop and refine guidance.
We are finalizing the requirement that a MIPS eligible clinician
must meet the Protect Patient Health Information objective and measure
in order to earn any score within the advancing care information
performance category.
(ii) Advancing Care Information Performance Category Base Score Primary
Proposal
In the 2015 EHR Incentive Programs final rule (80 FR 62829-62871),
we finalized certain objectives and measures EPs would report to
demonstrate meaningful use of CEHRT for Stage 3. Under our proposal for
the base score of the advancing care information performance category,
MIPS eligible clinicians would be required to submit the numerator (of
at least one) and denominator, or yes/no statement as appropriate (only
a yes statement would qualify for credit under the base score), for
each measure within a subset of objectives (Electronic Prescribing,
Patient Electronic Access to Health Information, Coordination of Care
Through Patient Engagement, Health Information Exchange, and Public
Health and Clinical Data Registry Reporting) adopted in the 2015 EHR
Incentive Programs final rule for Stage 3 to account for the base score
of 50 percent of the advancing care information performance category
score. Successfully submitting a numerator and denominator or yes/no
statement for each measure of each objective would earn a base score of
50 percent for the advancing care information performance category. As
proposed in the proposed rule, failure to meet the submission criteria
(numerator/denominator or yes/no statement as applicable) and measure
specifications (81 FR 28226 through 28230) for any measure in any of
the objectives would result in a score of zero for the advancing care
information performance category base score, a performance score of
zero (discussed in section II.E.5.g. of the proposed rule 81 FR 28215)
and an advancing care information performance category score of zero.
For the Public Health and Clinical Data Registry Reporting
objective there is no numerator and denominator to measure; rather, the
measure is a ``yes/no'' statement of whether the MIPS eligible
clinician has completed the measure, noting that only a yes statement
would qualify for credit under the base score. Therefore, we proposed
that MIPS eligible clinicians would include a yes/no statement in lieu
of the numerator/denominator statement within their submission for the
advancing care information performance category for the Public Health
and Clinical Data Registry Reporting objective. We further proposed
that, to earn points in the base score, a MIPS eligible clinician would
only need to complete submission on the Immunization Registry Reporting
measure of this objective. Completing any additional measures under
this objective would earn one additional bonus point in the advancing
care information performance category score. For further information on
this proposed objective, we direct readers to 81 FR 28230.
(iii) Advancing Care Information Performance Category Base Score
Alternate Proposal
Under our alternate proposal for the base score of the advancing
care information performance category, a MIPS eligible clinician would
be required to submit the numerator (of at least one) and denominator,
or yes/no statement as appropriate, for each measure, for all
objectives and measures for Stage 3 in the 2015 EHR Incentive Programs
final rule (80 FR 62829-62871) as outlined in Table 7 of the proposed
rule (81 FR 28223). Successfully submitting a numerator and denominator
for each measure of each objective would earn a base score of 50
percent for the advancing care information performance category.
Failure to meet the submission requirements, or measure specifications
for any measure in any of the objectives would result in a score of
zero for the advancing care information performance category base
score, a performance score of 0 (discussed in section II.E.5.g. of the
proposed rule), and an advancing care information performance category
score of 0.
We proposed the same approach in the alternate proposal for the
Public Health and Clinical Data Registry Reporting objective as for the
primary proposal. We direct readers to 81 FR 28226 through
28230 for further details on the individual objectives and measures.
The following is a summary of the comments we received regarding
our base score primary and alternate proposals, which differ based on
whether reporting the clinical decision support (CDS) and computerized
provider order entry (CPOE) objectives would be required.
Comment: Most commenters supported the adoption of the base score
primary proposal, which eliminates the objectives and associated
measures for CPOE and CDS, and agreed that most MIPS eligible
clinicians already use CPOE and CDS and perform well on those measures.
Several noted that the measures require additional data entry and that
pop-up alerts interfere with clinical workflow; thus, removal of these
measures could improve clinical workflow in the EHR.
Response: We agree and appreciate the support of these commenters.
As we have done previously under the EHR Incentive Programs, we will
continue to monitor performance on objectives and measures and plan to
propose refined and new measures in future years.
Comment: Since CPOE and CDS continue to be valuable to practices,
many commenters support the alternate proposal to require the CPOE and
CDS objectives in the base score for the advancing care information
performance category. One commenter stated that maintaining these two
objectives offers an opportunity for the development of important
measures for specialists, including anesthesia-focused measures.
Another commenter suggested including the CPOE objective in the
performance score of the advancing care information performance
category to give more flexibility and offer an opportunity for MIPS
eligible clinicians to earn more points, especially for those MIPS
eligible clinicians who will be using an EHR technology certified to
the 2014 Edition in 2017.
Response: While we agree that CPOE and CDS are valuable, we
continue to believe that it is important to streamline and simplify the
reporting requirements under MIPS. We note that the functionality
supporting these objectives will continue to be required as part of
CEHRT requirements.
Comment: One commenter urged CMS to clarify that even if the
reporting of CPOE and CDS measures is eliminated under the primary
proposal base score of the advancing care information performance
category, MIPS eligible clinicians who utilize CPOE are still expected
to utilize appropriately credentialed clinical staff to enter the
orders and those who utilize CDS must have the required functionality
turned on to receive credit in the advancing care information
performance category base score.
Response: As for the functionality, even if the CPOE and CDS
objectives and measures are not included for reporting under the
advancing care information performance category, it is
[[Page 77221]]
still expected that MIPS eligible clinicians will continue to have the
functionality enabled as a part of CEHRT.
Comment: Some commenters recommended retaining the CPOE and CDS
objectives and associated measures, noting that while the two
functionalities are widely adopted by those who were already
participating in the Medicare and Medicaid EHR Incentive Programs, MIPS
eligible clinicians include practitioners who were not eligible for
those programs, many of whom have not yet adopted the functionalities
and activities required for those objectives. Some commenters asked
that, if the CPOE objective and associated measures are retained, CMS
include the low volume threshold exclusions.
Response: While we appreciate these concerns, we continue to
believe that it is important to streamline and simplify the reporting
requirements under MIPS. Practitioners who are not eligible to
participate in the EHR Incentive Programs but are MIPS eligible
clinicians will be subject to many new requirements and will have a
considerable amount of learning to do in their initial years of the
program; thus, we do not believe it is necessary to add to that list of
requirements, nor to increase the reporting burden for clinicians with
more EHR experience who have historically performed well on these
measures under the EHR Incentive Programs. We note that the
functionality supporting these objectives
will continue to be required as part of certification requirements and
available to new adopters of EHR technology.
Comment: One commenter expressed skepticism about the applicability
to specialists of the objectives emphasized in the base score. For
example, the commenter expressed concern that many
anesthesiologists may have difficulty attesting to the Patient
Electronic Access, Coordination of Care Through Patient Engagement and
Health Information Exchange objectives. They suggested developing
equally valuable substitute measures and objectives that focus on the
use of CEHRT by specialists and MIPS eligible clinicians who work in
settings that vary from traditional office-based practices.
Response: We understand that the practice settings of MIPS eligible
clinicians vary and that meeting the proposed objectives and measures
may require different levels of effort. We will consider the
development of objectives and measures for specialists and other
clinicians who do not work in office settings in future rulemaking.
Comment: We received many suggested changes to the measures
included in our primary proposal. Some requested that we allow MIPS
eligible clinicians to choose which measures are most relevant to their
practice. Others recommended that the base score be streamlined and
focus on three critical objectives of meaningful use: Protection of
personal health information, patient electronic access to his/her
health information, and health information exchange. Some commenters
recommended including the smallest set of objectives in the base score
required by statute and including any additional objectives in the
performance score category.
Response: We appreciate the many suggested changes to measures and
measure reporting requirements and will take them into consideration in
this and future rules. We are also conscious of the need to balance the
complexity of reporting requirements with reporting goals. In our final
policy, we have restructured our base score to reduce reporting burden
and have limited the required measures, keeping only those measures that
implement certain requirements under section 1848(o)(2)(A) of the Act,
which include e-Prescribing and two of the measures under the Health
Information Exchange objective; as well as Security Risk Analysis,
which we have previously stated is of paramount importance to
protecting patient privacy; and Provide Patient Access which is
critical to increasing patient engagement and allowing patients access
to their personal health data. We note that this reduction of measures
is responsive to the comments we received requesting that we move away
from the all-or-nothing scoring methodology in the proposed base score.
While we believe all measures under the advancing care information
performance category are of utmost importance, we acknowledge that we
must balance the need for these data with data collection and reporting
burden. We refer readers to section II.E.5.g.(6)(a) for more discussion
of our final scoring policy.
After consideration of the comments, we are finalizing our primary
proposal with modifications described in section II.E.5.g.(6)(a) for
the base score. This proposal does not require the reporting of the
objectives and measures for CDS and CPOE. We note that the
functionalities required for these objectives and associated measures
are still required as part of ONC's certification criteria for CEHRT.
The following is a summary of the comments we received related to
the bonus for Public Health and Clinical Data Registry Reporting.
Comment: The majority of commenters recommended that more bonus
credit should be awarded to MIPS eligible clinicians for reporting to
additional registries by either increasing the bonus to 5 or 10 percent
or by offering a bonus for each additional registry to which the MIPS
eligible clinician reports. One commenter specifically expressed
concern that only awarding 1 percent downplays the importance and
benefit of submitting data to multiple registries. Many commenters
supported the proposal that Immunization Registry Reporting should be
the only registry required for the base score, but encouraged CMS to
provide more than 1 percent as a bonus for additional registry
reporting. Another suggested that for CY 2017, CMS require two public
health reporting measures in the Public Health and Clinical Data
Registry Reporting objective for the base score, including mandatory
reporting to immunization registries and any of the optional public
health measures.
Response: The Public Health and Clinical Data Registry Reporting
objective focuses on the importance of the ongoing lines of
communication that should exist between MIPS eligible clinicians,
public health agencies, and clinical data registries; thus, we agree
that a larger bonus should be awarded for reporting to additional
registries under the Public Health and Clinical Data Registry Reporting
objective. These registries play an important part in monitoring the
health status of patients across the country, and some, for example
syndromic surveillance registries, help in the early detection of
outbreaks, which is critical to public health overall.
After consideration of the comments we received, and for the
reasons mentioned above, we are increasing the bonus score to 5 percent
in the advancing care information performance category score for
reporting to one or more public health or clinical data registries
beyond the Immunization Registry Reporting measure. We note that in our
effort to reduce the number of required measures in the base score and
simplify reporting requirements, the Immunization Registry Reporting
measure is no longer required as part of the base score; however, MIPS
eligible clinicians can earn 10 percent in the performance score for
reporting this measure. Additionally, if the MIPS eligible clinician
reports to one or more additional registries under the Public Health
and Clinical Data Registry Reporting objective, they will earn the 5
[[Page 77222]]
percent bonus score. We note that the bonus is only available to MIPS
eligible clinicians who earn a base score.
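The finalized registry scoring described above can be summarized in a short sketch; the function and argument names are hypothetical, not official terminology:

```python
# Illustrative sketch of the finalized registry scoring: reporting the
# Immunization Registry Reporting measure earns 10 percent in the
# performance score, and reporting to one or more additional registries
# earns a 5 percent bonus, available only to clinicians who earn a base
# score. Names here are hypothetical assumptions.

def registry_points(earned_base_score, reported_immunization,
                    additional_registry_count):
    """Return (performance_pct, bonus_pct) for registry reporting."""
    performance = 10 if reported_immunization else 0
    # The 5 percent bonus requires both an earned base score and at least
    # one registry beyond the Immunization Registry Reporting measure.
    bonus = 5 if earned_base_score and additional_registry_count >= 1 else 0
    return performance, bonus

print(registry_points(True, True, 2))  # (10, 5)
print(registry_points(True, True, 0))  # (10, 0): no additional registries
```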
(iv) 2017 Advancing Care Information Transition Objectives and Measures
(Referred to in the Proposed Rule as Modified Stage 2)
In the 2015 EHR Incentive Programs final rule (80 FR 62772), we
streamlined reporting for EPs by adopting a single set of objectives
and measures for EPs regardless of their prior stage of participation.
This was the first step in synchronizing the objectives and eliminating
the separate stages of meaningful use in the EHR Incentive Program. In
doing so, we also sought to provide some flexibility and to allow
adequate time for EPs to move toward the more advanced use of EHR
technology. This flexibility included alternate exclusions and
specifications for EPs scheduled to demonstrate Stage 1 in 2015 and
2016 (80 FR 62788) and allowed clinicians to select either the Modified
Stage 2 Objectives or the Stage 3 Objectives in 2017 (80 FR 62772) with
all EPs moving to the Stage 3 Objectives in 2018. We note that in
section II.E.5.g (81 FR 28218 and 28219) of the proposed rule, we
proposed the requirements for MIPS eligible clinicians using various
editions of CEHRT in 2017 as it relates to the objectives and measures
they select to report.
In connection with that proposal, and in an effort not to unfairly
burden MIPS eligible clinicians who are still utilizing EHR technology
certified to the 2014 Edition certification criteria in 2017, we
proposed at Sec. 414.1380(b)(4) modified primary and alternate
proposals for the base score for those MIPS eligible clinicians
utilizing EHR technology certified to the 2014 Edition. We note that
these modified proposals are the same as the primary and alternate
proposals in regard to scoring and data submission, but vary
in the number of measures required under the Coordination of Care
Through Patient Engagement and Health Information Exchange objectives
as demonstrated in Table 8 of the proposed rule (81 FR 28224).
This approach allows MIPS eligible clinicians to continue moving
toward advanced use of CEHRT in 2018, but allows for flexibility in the
implementation of upgraded technology and in the selection of measures
for reporting in 2017.
The following is a summary of the comments we received regarding
the proposals for reporting on the Modified Stage 2 objectives and
measures for the advancing care information performance category in
2017. We note that in this final rule with comment period we will refer
to these measures as the 2017 Advancing Care Information Transition
objectives and measures instead of Modified Stage 2, which is a term
specific to the EHR Incentive Program.
Comment: Many commenters supported the proposal to allow MIPS
eligible clinicians to report on the 2017 Advancing Care Information
Transition objectives and measures in the 2017 performance period to
meet the requirements of the advancing care information performance
category. They stated that this approach offers flexibility to MIPS
eligible clinicians who do not yet use a 2015 Edition CEHRT.
Response: We agree. We are aware that in 2017 many MIPS eligible
clinicians might not yet have access to EHR technology certified to the
2015 Edition. Therefore, to accommodate these MIPS eligible clinicians
we will allow the option for them to report for the 2017 performance
period using EHR technology certified to the 2014 Edition or a
combination of both 2014 and 2015 Editions.
Comment: A majority of commenters suggested retaining the 2017
Advancing Care Information Transition objectives and measures beyond
performance periods in 2017, citing concerns about vendor and clinician
readiness to implement and use EHR technology certified to the
2015 Edition in time for the 2018 performance period. Additionally,
some commenters believed that the 2017 Advancing Care Information
Transition reporting requirements are less stringent, and therefore,
more feasible for MIPS eligible clinicians to achieve, resulting in
more MIPS eligible clinician success in the advancing care information
performance category. One commenter suggested continuing to allow the
reporting of 2017 Advancing Care Information Transition objectives and
not requiring the reporting of Advancing Care Information objectives
until a performance period in 2019.
Response: For the majority of measures in the EHR Incentive
Programs, the difference between the Modified Stage 2 measures and the
Stage 3 measures is the threshold required to successfully demonstrate
meaningful use. For the advancing care information performance
category, there are no thresholds and MIPS eligible clinicians are
allowed to select the objectives and measures most applicable to their
practice for reporting purposes. For this reason, we disagree that
either set of measures for the advancing care information performance
category is more stringent than the other. While we understand the
commenters' concerns about readiness for subsequent years as it relates
to adopting new technologies, we continue to believe that it is
important to move forward with a single set of objectives and measures
focused on the top priorities of clinical effectiveness, patient
engagement and health information exchange. We further maintain our
belief that it reduces complexity and burden to have all MIPS eligible
clinicians reporting on the same set of objectives and measures and the
same specifications for those measures. We note that we will accept a
minimum of 90 consecutive days of data within the CY 2018 performance
period for the advancing care information performance category in order
to support MIPS eligible clinicians and groups transitioning to
technology certified to the 2015 Edition for use in 2018. At this time,
we believe it is appropriate to require the use of EHR technology
certified to the 2015 Edition for the CY 2018 performance period and
encourage MIPS eligible clinicians to work with their EHR vendors in
the coming months to prepare for the transition to 2015 Edition CEHRT.
Comment: A few commenters requested clarification of the objectives
and measures to use for performance periods in CY 2017 if the MIPS
eligible clinician uses a combination of technologies certified to the
2014 and 2015 Editions during the performance period. The commenters
anticipate that many practices could begin the performance period using
2014 Edition and upgrade during the performance period to begin use of
2015 Edition. Others expect that MIPS eligible clinicians may use a
combination of 2014 and 2015 Editions during the performance period.
Commenters also requested clarification on how MIPS eligible clinicians
will be scored if the objectives and measures to which they report only
apply to part of the performance period and not the full calendar year.
Response: In 2017, a MIPS eligible clinician who has technology
certified to a combination of 2015 Edition and 2014 Edition may choose
to report on either the Advancing Care Information objectives and
measures specified for the advancing care information performance
category in section II.E.5.g.(7) of this final rule or the 2017
Advancing Care Information Transition objectives and measures specified
for the advancing care information performance category as described in
section II.E.5.g.(7) of this final rule if they have the appropriate
mix of technologies to support each measure
[[Page 77223]]
selected. If a MIPS eligible clinician switches from 2014 Edition to
2015 Edition CEHRT during the performance period, the data collected
for the base and performance score measures should be combined from
both the 2014 and 2015 Edition of CEHRT.
After consideration of the comments we received, we are finalizing
our proposal as proposed. We note that because we will accept a minimum
of 90 consecutive days of data from the CY 2017 performance period,
MIPS eligible clinicians who have EHR technology certified to the 2014
Edition and then transition to EHR technology certified to the 2015
Edition in 2017 have flexibility and may select which measures they
want to report on for the 2017 performance period.
(c) Performance Score
In addition to the base score, which requires submitting each of
the required objectives and measures to achieve 50 percent of the
possible points within the advancing care information performance
category, we proposed to allow multiple paths to achieve a score
greater than the 50 percent base score. The performance score is based
on the priority
goals established by us to focus on leveraging CEHRT to support the
coordination of care. A MIPS eligible clinician would earn additional
points above the base score for performance in the objectives and
measures for Patient Electronic Access, Coordination of Care through
Patient Engagement, and Health Information Exchange. These measures
have a focus on patient engagement, electronic access and information
exchange, which promote healthy behaviors by patients and lay the
groundwork for interoperability. These measures also have significant
opportunity for improvement among MIPS eligible clinicians and the
industry as a whole based on adoption and performance data. We believe
this approach for achievement above a base score in the advancing care
information performance category would provide MIPS eligible clinicians
a flexible and realistic incentive towards the adoption and use of
CEHRT.
We proposed at Sec. 414.1380(b)(4) that, for the performance
score, the eight associated measures under these three objectives would
each be assigned a total of 10 possible points. For each measure, a
MIPS eligible clinician may earn up to 10 percent of their performance
score based on their performance rate for the given measure. For
example, a performance rate of 95 percent on a given measure would earn
9.5 percentage points of the performance score for the advancing care
information performance category. This scoring approach is consistent
with the performance score approach outlined for other MIPS categories
in the proposed rule. Table 9 of the proposed rule (81 FR 28225),
provided an example of the proposed performance score methodology.
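The proposed performance score arithmetic can be sketched as follows; the function name and example rates are our own illustrative assumptions, not the Table 9 values:

```python
# Minimal sketch of the proposed performance score arithmetic: each of
# the eight measures is worth up to 10 percentage points, credited in
# proportion to the clinician's performance rate on that measure (so a
# 95 percent rate earns 9.5 points). The function name and example
# rates are illustrative assumptions.

def performance_score(performance_rates):
    """Sum each measure's rate (0-100) scaled to a 10-point maximum."""
    return sum(rate / 100 * 10 for rate in performance_rates)

rates = [95, 80, 100, 60, 70, 90, 50, 85]  # performance rates on 8 measures
print(round(performance_score(rates), 1))  # 63.0 of a possible 80.0
```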
We noted that in this methodology, a MIPS eligible clinician has
the potential to earn a performance score of up to 80 percent, which,
in combination with the base score would be greater than the total
possible 100 percent for the advancing care information performance
category. We stated that this methodology would allow flexibility for
MIPS eligible clinicians to focus on measures which are most relevant
to their practice to achieve the maximum performance category score,
while deemphasizing concentration in other measures which are not
relevant to their practice.
This proposed methodology recognizes the importance of promoting
health IT adoption and standards and the use of CEHRT to support
quality improvement, interoperability, and patient engagement. We
invited comments on our proposal.
The following is a summary of the comments we received regarding
our proposal.
Comment: A few commenters suggested removing the base score and
instead scoring MIPS eligible clinicians solely on performance for the
following measures: (1) Patient Electronic Access; (2) Electronic
Prescribing; (3) Computerized Provider Order Entry; (4) Patient-Specific
Education; (5) View, Download, Transmit; (6) Secure Messaging; (7)
Patient-Generated Health Data; (8) Patient Care Record Exchange; (9)
Request/Accept Patient Care Record; and (10) Clinical Information
Reconciliation. Others requested that the patient engagement measures,
View, Download or Transmit, Secure Messaging, and Patient-Generated
Health Data be voluntary in order to provide flexibility.
Response: We appreciate the feedback and have significantly reduced
the number of required measures in the base score which adds both
flexibility and simplicity to the scoring methodology while addressing
statutory requirements. We refer readers to section II.E.5.g.(6)(b) of
this final rule with comment period for further discussion of our final
policy.
Comment: A commenter suggested that the performance score measures
should reflect the patient population because many MIPS eligible
clinicians treat patients that are poor, elderly, or have limited
English proficiency, and suggested that these factors strongly
disadvantage MIPS eligible clinicians on measures as compared to MIPS
eligible clinicians whose patient populations are better educated and
better off financially. Another suggested the advancing care
information performance category be renamed Health IT-related
activities score and reflect the improvement activities performance
category such that MIPS eligible clinicians select activities from a
long list.
Response: While we understand that the demographics and education
level of the patient populations of MIPS eligible clinicians may vary, we
disagree that measures in the advancing care information performance
category should be adjusted to accommodate for different patient
populations. We believe MIPS eligible clinicians who have CEHRT have
the ability to adequately use CEHRT to perform the actions required for
the measures, regardless of their patient population. We also believe
we have offered enough flexibility for MIPS eligible clinicians who are
concerned about patient action requirements by not establishing measure
thresholds and instead requiring a minimum of one in the numerator for
numerator/denominator measures. We direct readers to the discussion of
the advancing care information performance category scoring in section
II.E.5.g.(6)(a) of this final rule with comment period. We look forward
to continuing to refine the advancing care information performance
category over time.
(d) Overall Advancing Care Information Performance Category Score
To determine the MIPS eligible clinician's overall advancing care
information performance category score, we proposed to use the sum of
the base score, performance score, and the potential Public Health and
Clinical Data Registry Reporting bonus point. We note that if the sum
of the MIPS eligible clinician's base score (50 percent) and
performance score (out of a possible 80 percent), together with the
Public Health and Clinical Data Registry Reporting bonus point, is
greater than 100 percent, we would apply an advancing care information
performance
category score of 100 percent. For example, if the MIPS eligible
clinician earned the base score of 50 percent, a performance score of
60 percent and the bonus point for Public Health and Clinical Data
Registry Reporting for a total of 111 percent, the MIPS eligible
clinician's overall advancing care information performance category
score would be 100 percent. The total percentage score (out of 100)
[[Page 77224]]
for the advancing care information performance category would then be
multiplied by the weight (25 percent) of the advancing care information
performance category and incorporated into the MIPS final score, as
described at 81 FR 28220 through 28271 of the proposed rule. Table 10
of the proposed rule (81 FR 28226) provides an example of the
calculation of the advancing care information performance category
score based on these proposals. For our final policy, we revised the
proposed scoring approach by reducing the number of required measures
in the base score and adding measures to the performance score in an
effort to address commenters' concerns (as described above) and add
flexibility wherever possible. The base score and performance score are
added together, along with any additional bonus score if applicable, to
determine the overall advancing care information performance category
score.
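The scoring arithmetic described above can be sketched as follows. This is an illustrative sketch only, not part of the rule; the function names are invented for the example, and the all-or-nothing base score is represented simply as 0 or 50.

```python
def aci_category_score(base, performance, bonus):
    """Overall advancing care information performance category score.

    The base score is all-or-nothing (0 or 50): a clinician who does not
    report all required base-score measures earns no category score at all.
    The sum of base, performance, and bonus scores is capped at 100.
    """
    if base == 0:
        return 0.0
    return min(100.0, base + performance + bonus)


def aci_final_score_points(category_score, category_weight=0.25):
    """Contribution of the category to the MIPS final score (25% weight)."""
    return category_score * category_weight


# Worked example from the discussion above: a base score of 50 percent,
# a performance score of 60 percent, and the 1-point registry reporting
# bonus total 111 percent, which is capped at 100 percent.
score = aci_category_score(50, 60, 1)
points = aci_final_score_points(score)
```

Under the final policy the same cap applies, with up to 90 performance points and up to 15 bonus points (155 possible points, capped at 100).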
Under the final policy, a MIPS eligible clinician must report all
required measures of the base score to earn any base score, and thus to
earn any score in the advancing care information performance category.
We understand that many commenters preferred that we do away entirely
with the all-or-nothing approach to the base score and we have made
adjustments to the base score to be responsive to those commenters'
concerns. We note that section 1848(o)(2)(A) of the Act includes
certain requirements that we have chosen to implement through certain
measures such as e-Prescribing, Send a Summary of Care and Request/
Accept Summary, and thus, we continue to require these measures in the
advancing care information performance category base score. In
addition, we have maintained the Security Risk Analysis measure as a
required measure as we believe it is essential to protecting patient
privacy as discussed in the proposed rule (81 FR 28221), and thus, we
believe should be mandatory for reporting. We have also maintained
Provide Patient Access as the fifth required measure under the base
score because we believe it is essential for patients to have access to
their health care information in order to improve health, provide
transparency and drive patient engagement. To address commenters'
concerns, we have reduced the total number of required measures in the
base score to only these five, and moved other measures to the
performance score where MIPS eligible clinicians can choose which
measures to report based on their individual practice. While we believe
all measures under the advancing care information performance category
are of utmost importance, we acknowledge that we must balance the need
for these data with data collection and reporting burden. Given the
considerable reduction in required measures, we do not believe it is
appropriate to increase the weight of the base score, and thus, it
remains at 50 percent of the advancing care information performance
category score.
The performance score builds upon the base score and is based on a
MIPS eligible clinician's performance rate for each measure reported
for the performance score (calculated using the numerator/denominator).
A performance rate of 1-10 percent would earn 1 percentage point, a
performance rate of 11-20 percent would earn 2 percentage points, and
so on, up to 10 percentage points for a performance rate of 91-100
percent. For example, if the clinician reports a numerator/denominator of
85/100 for the Patient-Specific Education measure, their performance
rate would be 85 percent and they would earn 9 percentage points toward
their performance score for the advancing care information performance
category. With nine measures included in the performance score, a MIPS
eligible clinician has the ability to earn up to 90 percentage points
if they report all measures in the performance score.
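The decile mapping described above can be expressed as a short sketch; the function name is illustrative and assumes, per the discussion above, a reportable numerator/denominator measure.

```python
import math


def performance_points(numerator, denominator):
    """Percentage points earned for one numerator/denominator measure in
    the performance score: each decile of the performance rate earns one
    point (1-10% -> 1 point, 11-20% -> 2 points, ..., 91-100% -> 10)."""
    rate = 100.0 * numerator / denominator
    return min(10, math.ceil(rate / 10.0))


# Example from the text: a numerator/denominator of 85/100 is an
# 85 percent performance rate, earning 9 percentage points.
assert performance_points(85, 100) == 9
```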
We note that the measures under the Public Health and Clinical Data
Registry Reporting objective are yes/no measures and do not have a
numerator/denominator to calculate the performance rate. For the
Immunization Registry Reporting measure, we will award 0 or 10
percentage points for the performance score (0 percent for a ``no''
response, 10 percent for a ``yes'' response). Active engagement with a
public health or clinical data registry to meet any other measure
associated with the Public Health and Clinical Data Registry Reporting
objective will earn the MIPS eligible clinician a bonus of 5 percentage
points as outlined in section II.E.5.g.(6)(b) of this final rule with
comment period. MIPS eligible clinicians are not required to report the
Immunization Registry Reporting measure in order to earn the 5
percentage point bonus for reporting to one or more additional
registries.
Two of the measures in the base score are not included in the
performance score. The Security Risk Analysis and e-Prescribing
measures are required under the base score, but a MIPS eligible
clinician will not earn additional points under the performance score
for reporting these measures. As we stated in the proposed rule, we
believe the Security Risk Analysis measure is of paramount importance
and applicable across all objectives. Therefore, the Protect Patient Health
Information objective and Security Risk Analysis measure are
foundational requirements for the advancing care information
performance category (81 FR 28221). For this reason, we are including
it as a required measure in the base score, but are not awarding any
additional score for performance. The e-Prescribing measure is one of
the measures that fulfills a statutory requirement under section
1848(o)(2)(A) of the Act, and thus, we are requiring it as part of the
base score. Given the historically high performance on this measure
under the EHR Incentive Program with EPs achieving an average of 87
percent of all permissible prescriptions written and transmitted
electronically using CEHRT in 2015, we are not including it in the
performance score for the advancing care information performance
category.
Under our final policy, MIPS eligible clinicians have the ability
to earn an overall score for the advancing care information performance
category of up to 155 percentage points, which will be capped at 100
percent when the base score, performance score and bonus score are all
added together. We believe this addresses commenters' requests for
additional opportunities to earn credit in all aspects of the advancing
care information performance category including the base score,
performance score and bonus score. In addition, we believe this scoring
approach adds flexibility for MIPS eligible clinicians to choose
measures that are most applicable to their practice and best represent
their performance. While certain measures are still required for
reporting, we have reduced this number from 11 required measures in the
proposed base score to only five in this final policy. We have also
increased the number of measures for which a MIPS eligible clinician
has the ability to earn performance score credit from eight measures in
the proposed performance score to nine in this final policy. We note
that MIPS eligible clinicians can choose which of these measures to
focus on for their performance score allowing clinicians to customize
their reporting and score.
[[Page 77225]]
Table 9--Advancing Care Information Performance Category Scoring Methodology Advancing Care Information Objectives and Measures
--------------------------------------------------------------------------------------------------------------------------------------------------------
Advancing care information Advancing care Required/ not required for Performance score (up to
objective information measure * base score (50%) 90%) Reporting requirement
--------------------------------------------------------------------------------------------------------------------------------------------------------
Protect Patient Health Information. Security Risk Required.................. 0......................... Yes/No Statement.
Analysis.
Electronic Prescribing............. e-Prescribing........ Required.................. 0......................... Numerator/Denominator.
Patient Electronic Access.......... Provide Patient Required.................. Up to 10.................. Numerator/Denominator.
Access.
Patient-Specific Not Required.............. Up to 10.................. Numerator/Denominator.
Education.
Coordination of Care Through View, Download, or Not Required.............. Up to 10.................. Numerator/Denominator.
Patient Engagement. Transmit (VDT).
Secure Messaging..... Not Required.............. Up to 10.................. Numerator/Denominator.
Patient-Generated Not Required.............. Up to 10.................. Numerator/Denominator.
Health Data.
Health Information Exchange........ Send a Summary of Required.................. Up to 10.................. Numerator/Denominator.
Care.
Request/Accept Required.................. Up to 10.................. Numerator/Denominator.
Summary of Care.
Clinical Information Not Required.............. Up to 10.................. Numerator/Denominator.
Reconciliation.
Public Health and Clinical Data Immunization Registry Not Required.............. 0 or 10................... Yes/No Statement.
Registry Reporting. Reporting.
Syndromic Not Required.............. Bonus..................... Yes/No Statement.
Surveillance
Reporting.
Electronic Case Not Required.............. Bonus..................... Yes/No Statement.
Reporting.
Public Health Not Required.............. Bonus..................... Yes/No Statement.
Registry Reporting.
Clinical Data Not Required.............. Bonus..................... Yes/No Statement.
Registry Reporting.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Bonus (up to 15)
--------------------------------------------------------------------------------------------------------------------------------------------------------
Report to one or more additional public health and clinical data registries beyond the 5 bonus................... Yes/No Statement.
Immunization Registry Reporting measure.
Report improvement activities using CEHRT 10 bonus.................. Yes/No Statement.
--------------------------------------------------------------------------------------------------------------------------------------------------------
* Several measure names have been changed since the proposed rule. This table reflects those changes. We refer readers to section II.E.5.g.(7)(a) of
this final rule with comment period for further discussion of measure name changes.
Comment: In addition to the scoring comments we summarized in the
above sections, many commenters expressed concerns related to the
difference in scoring for the 2017 Advancing Care Information
Transition objectives and measures (referred to in the proposed rule as
the Modified Stage 2 Objectives and Measures). Commenters highlighted
that for the proposed policy, there are eight available measures in the
Advancing Care Information Objectives and Measures while there are only
six available measures in the 2017 Advancing Care Information
Transition objectives and measures for which MIPS eligible clinicians
can earn credit in the performance score of the advancing care
information performance category. Commenters believed this would pose a
disadvantage to those MIPS eligible clinicians with EHR technology
certified to the 2014 Edition who would only be able to report on 2017
Advancing Care Information Transition objectives and measures, and
consequently have a lesser opportunity to earn credit in the
performance score.
Response: We appreciate the comments and have outlined our final
scoring methodology for the 2017 Advancing Care Information Transition
objectives and measures in Table 10 to demonstrate that those MIPS
eligible clinicians reporting the 2017 Advancing Care Information
Transition objectives and measures will not be disadvantaged. MIPS
eligible clinicians will have the ability to earn up to 155 percentage
points for the advancing care information performance category, which
will be capped at 100 percent, regardless of which set of measures they
report. We note that in order to make up the difference in the number
of measures included in the performance score for the two measure sets,
we have increased the number of percentage points available in the
performance score for the Provide Patient Access and Health Information
Exchange measures (up to 20 percentage points for each measure), as these
measures are critical to our goals of patient engagement and
interoperability.
Table 10--Advancing Care Information Performance Category Scoring Methodology for 2017 Advancing Care Information Transition--Objectives and Measures
--------------------------------------------------------------------------------------------------------------------------------------------------------
2017 Advancing care
2017 Advancing care information information transition Required/ not required for Performance score (up Reporting requirement
transition objective (2017 only) measure * (2017 only) base score (50%) to 90%)
--------------------------------------------------------------------------------------------------------------------------------------------------------
Protect Patient Health Information. Security Risk Analysis Required................... 0...................... Yes/No Statement.
Electronic Prescribing............. E-Prescribing......... Required................... 0...................... Numerator/Denominator.
Patient Electronic Access.......... Provide Patient Access Required................... Up to 20............... Numerator/Denominator.
View, Download, or Not Required............... Up to 10............... Numerator/Denominator.
Transmit (VDT).
Patient-Specific Education......... Patient-Specific Not Required............... Up to 10............... Numerator/Denominator.
Education.
Secure Messaging................... Secure Messaging...... Not Required............... Up to 10............... Numerator/Denominator.
Health Information Exchange........ Health Information Required................... Up to 20............... Numerator/Denominator.
Exchange.
Medication Reconciliation.......... Medication Not Required............... Up to 10............... Numerator/Denominator.
Reconciliation.
[[Page 77226]]
Public Health Reporting............ Immunization Registry  Not Required............... 0 or 10................ Yes/No Statement.
                                     Reporting.
                                    Syndromic Surveillance Not Required............... Bonus.................. Yes/No Statement.
                                     Reporting.
                                    Specialized Registry   Not Required............... Bonus.................. Yes/No Statement.
                                     Reporting.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Bonus up to 15%
--------------------------------------------------------------------------------------------------------------------------------------------------------
Report to one or more additional public health and clinical data registries beyond the 5% bonus............... Yes/No Statement.
Immunization Registry Reporting measure.
--------------------------------------------------------------------------------------------------------------------------------------------------------
Report improvement activities using CEHRT 10% bonus.............. Yes/No Statement.
--------------------------------------------------------------------------------------------------------------------------------------------------------
* Several measure names have been changed since the proposed rule. This table reflects those changes. We refer readers to section II.E.5.g.(7)(a) of
this final rule with comment period for further discussion of measure name changes.
We are seeking comment on our final scoring methodology policies,
and future enhancements to the methodology.
(e) Scoring Considerations
Section 1848(q)(5)(E)(ii) of the Act, as added by section 101(c) of
the MACRA, provides that in any year in which the Secretary estimates
that the proportion of EPs (as defined in section 1848(o)(5) of the
Act) who are meaningful EHR users (as determined under section
1848(o)(2) of the Act) is 75 percent or greater, the Secretary may
reduce the applicable percentage weight of the advancing care
information performance category in the MIPS final score, but not below
15 percent, and increase the weightings of the other performance
categories such that the total percentage points of the increase equals
the total percentage points of the reduction. We note section
1848(o)(5) of the Act defines an EP as a physician, as defined in
section 1861(r) of the Act. For purposes of applying section
1848(q)(5)(E)(ii) of the Act, we proposed to estimate the proportion of
physicians as defined in section 1861(r) who are meaningful EHR users
as those physician MIPS eligible clinicians who earn an advancing care
information performance category score of at least 75 percent under our
proposed scoring methodology for the advancing care information
performance category for a performance period. This would require the
MIPS eligible clinician to earn the advancing care information
performance category base score of 50 percent, and an advancing care
information performance score of at least 25 percent (or 24 percent
plus the Public Health and Clinical Data Registry Reporting bonus
point) for an overall performance category score of 75 percent for the
advancing care information performance category. We alternatively
proposed to estimate the proportion of physicians as defined in
section 1861(r) who are meaningful EHR users as those physician MIPS
eligible clinicians who earn an advancing care information performance
category score of 50 percent (which would only require the MIPS
eligible clinician to earn the advancing care information performance
category base score) under our proposed scoring methodology for the
advancing care information performance category for a performance
period, and we solicited comments on both of these proposed thresholds.
We proposed to base this estimation on data from the relevant
performance period, if we have sufficient data available from that
period. For example, if feasible, we would consider whether to reduce
the applicable percentage weight of the advancing care information
performance category in the MIPS final score for the 2019 MIPS payment
year based on an estimation using the data from the 2017 performance
period. We noted that in section II.E.5.g.(8) of the proposed rule (81
FR 28231-28232) we proposed to reweight the advancing care information
performance category to zero for certain hospital-based physicians and
other physicians. These physicians meet the definition of MIPS eligible
clinicians, but would not be included in the estimation because the
advancing care information performance category would be weighted at
zero for them. We note that any adjustments of the performance category
weights specified in section 1848(q)(5)(E) of the Act based on this
policy would be established in future notice and comment rulemaking.
The following is a summary of the comments we received regarding
our proposed definition of meaningful EHR user.
Comment: Commenters overwhelmingly supported the proposal to define
meaningful EHR users as those MIPS eligible clinicians who earn a score
of 75 percent in the advancing care information performance category.
They believed that a lower score, such as 50 percent, would not be
stringent enough and that the majority of MIPS eligible clinicians
would achieve the meaningful EHR user status by simply reporting and
attesting to just one patient encounter for each measure. Additionally,
many commenters pointed out that this would result in a reduction of
the applicable weight of the advancing care information performance
category in the MIPS final score and would reduce the focus and
emphasis on increased patient engagement and health information
exchange.
Response: We appreciate this feedback and agree that 50 percent
would be a very low threshold to be considered a meaningful EHR user in
the advancing care information performance category.
Comment: A few commenters supported the alternate proposal to
define meaningful EHR users as those MIPS eligible clinicians who earn
a score of 50 percent in the advancing care information performance
category. This approach would only require MIPS eligible clinicians to
achieve the base score of 50 percent to achieve the meaningful EHR user
status. They cited the overall complexity of the reporting
requirements, as well as level of difficulty for small practices to
score well in the performance category.
Response: We understand the commenters' concerns regarding the
complexity of reporting requirements, and note that we have addressed
this through our final scoring policy outlined in section
II.E.5.g.(6)(d) of this final rule with comment period. We
[[Page 77227]]
believe the adjustments made in the scoring methodology address
commenters' concerns by reducing the requirements to earn the base
score, and thus, there is no need to lower the threshold for being
considered a meaningful EHR user.
Comment: One commenter requested that the definition of a
meaningful EHR user and the requirements to achieve this status in the
MIPS be further clarified in this rule, stating that it is important
clearly define expectations and set a higher standard in order to
achieve interoperability and EHR-aided improved health outcomes for
Medicare beneficiaries.
Response: We appreciate this feedback and reiterate that a
meaningful EHR user under this policy is a physician, as defined in
section 1861(r) of the Act, who earns an advancing care information
performance category overall score of 75 percent per our primary
proposal outlined above. To earn a score of 75 percent in the advancing
care information performance category, a physician would need to
achieve the base score of 50 percent, plus additional performance
and/or bonus score, for a total of 75 percent, or 18.75 performance
category points as applied to the MIPS final score.
After consideration of the comments we received, in combination
with our final scoring methodology and its impact on this policy, we
are finalizing as proposed our primary proposal for purposes of
applying section 1848(q)(5)(E)(ii) of the Act, to estimate the
proportion of physicians as defined in section 1861(r) of the Act who
are meaningful EHR users as those physician MIPS eligible clinicians
who earn an advancing care information performance category score of at
least 75 percent for a performance period. We will base this estimation
on data from the relevant performance period, if we have sufficient
data available from that period. We will not include in this estimation
physicians for whom the advancing care information performance category
is weighted at zero percent under section 1848(q)(5)(F) of the Act.
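The finalized estimation can be sketched as follows. This is a hypothetical illustration, not the agency's implementation; the function name and the use of None to represent physicians whose category is reweighted to zero are assumptions made for the example.

```python
def meaningful_ehr_user_proportion(category_scores, threshold=75.0):
    """Estimate the proportion of physicians who are meaningful EHR users:
    those whose advancing care information performance category score is at
    least the threshold (75 percent under the finalized primary proposal).

    `category_scores` maps physician identifier -> category score, with
    None marking physicians for whom the category is weighted at zero
    (they are excluded from the estimation entirely).
    """
    eligible = [s for s in category_scores.values() if s is not None]
    if not eligible:
        return 0.0
    users = sum(1 for s in eligible if s >= threshold)
    return users / len(eligible)
```

Under section 1848(q)(5)(E)(ii) of the Act, only if this estimated proportion is 75 percent or greater may the Secretary reduce the category's weight in the MIPS final score (not below 15 percent), through future rulemaking.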
(7) Advancing Care Information Performance Category Objectives and
Measures Specifications
(a) Advancing Care Information Objectives and Measures Specifications
(Referred to in the Proposed Rule as MIPS Objectives and Measures)
We proposed the objectives and measures for the advancing care
information performance category of MIPS as outlined in the proposed
rule. We noted that these objectives and measures have been adapted
from the Stage 3 objectives and measures as finalized in the 2015 EHR
Incentive Programs final rule (80 FR 62829 through 62871); however, we
did not propose to maintain the previously established thresholds for
MIPS. Any additional changes to the objectives and measures were
outlined in the proposed rule. For a more detailed discussion of the
Stage 3 objectives and measures, including explanatory material and
defined terms, we refer readers to the 2015 EHR Incentive Programs
final rule (80 FR 62829 through 62871).
Objective: Protect Patient Health Information.
Objective: Protect electronic protected health information (ePHI)
created or maintained by the CEHRT through the implementation of
appropriate technical, administrative, and physical safeguards.
Security Risk Analysis Measure: Conduct or review a security risk
analysis in accordance with the requirements in 45 CFR 164.308(a)(1),
including addressing the security (to include encryption) of ePHI data
created or maintained by CEHRT in accordance with requirements in 45
CFR 164.312(a)(2)(iv) and 45 CFR 164.306(d)(3), implement security
updates as necessary, and correct identified security deficiencies as
part of the MIPS eligible clinician's risk management process.
Objective: Electronic Prescribing.
Objective: Generate and transmit permissible prescriptions
electronically.
e-Prescribing Measure: At least one permissible prescription
written by the MIPS eligible clinician is queried for a drug formulary
and transmitted electronically using CEHRT.
Denominator: Number of prescriptions written for drugs
requiring a prescription in order to be dispensed other than controlled
substances during the performance period; or number of prescriptions
written for drugs requiring a prescription in order to be dispensed
during the performance period.
Numerator: The number of prescriptions in the denominator
generated, queried for a drug formulary, and transmitted electronically
using CEHRT.
For this objective, we note that the 2015 EHR Incentive Program
final rule included a discussion of controlled substances in the
context of the Stage 3 objective and measure (80 FR 62834), which we
understand from stakeholders has caused confusion. We therefore
proposed for both MIPS and for the EHR Incentive Programs that health
care providers would continue to have the option to include or not
include controlled substances that can be electronically prescribed in
the denominator. This means that MIPS eligible clinicians may choose to
include controlled substances in the definition of ``permissible
prescriptions'' at their discretion where feasible and allowable by law
in the jurisdiction where they provide care. The MIPS eligible
clinician may also choose not to include controlled substances in the
definition of ``permissible prescriptions'' even if such electronic
prescriptions are feasible and allowable by law in the jurisdiction
where they provide care.
Objective: Clinical Decision Support (Alternate Proposal Only).
Objective: Implement clinical decision support (CDS) interventions
focused on improving performance on high-priority health conditions.
Clinical Decision Support (CDS) Interventions Measure: Implement
three clinical decision support interventions related to three CQMs at
a relevant point in patient care for the entire performance period.
Absent three CQMs related to a MIPS eligible clinician's scope of
practice or patient population, the clinical decision support
interventions must be related to high-priority health conditions.
Drug Interaction and Drug-Allergy Checks Measure: The MIPS eligible
clinician has enabled and implemented the functionality for drug-drug
and drug-allergy interaction checks for the entire performance period.
Objective: Computerized Provider Order Entry (Alternate Proposal
Only).
Objective: Use computerized provider order entry (CPOE) for
medication, laboratory, and diagnostic imaging orders directly entered
by any licensed healthcare professional, credentialed medical
assistant, or a medical staff member credentialed to and performing the
equivalent duties of a credentialed medical assistant, who can enter
orders into the medical record per state, local, and professional
guidelines.
Medication Orders Measure: At least one medication order created by
the MIPS eligible clinician during the performance period is recorded
using CPOE.
Denominator: Number of medication orders created by the
MIPS eligible clinician during the performance period.
Numerator: The number of orders in the denominator
recorded using CPOE.
Laboratory Orders Measure: At least one laboratory order created by
the MIPS eligible clinician during the performance period is recorded
using CPOE.
[[Page 77228]]
Denominator: Number of laboratory orders created by the
MIPS eligible clinician during the performance period.
Numerator: The number of orders in the denominator
recorded using CPOE.
Diagnostic Imaging Orders Measure: At least one diagnostic imaging
order created by the MIPS eligible clinician during the performance
period is recorded using CPOE.
Denominator: Number of diagnostic imaging orders created
by the MIPS eligible clinician during the performance period.
Numerator: The number of orders in the denominator
recorded using CPOE.
Objective: Patient Electronic Access.
Objective: The MIPS eligible clinician provides patients (or
patient-authorized representative) with timely electronic access to
their health information and patient-specific education.
Patient Access Measure: For at least one unique patient seen by the
MIPS eligible clinician: (1) The patient (or the patient-authorized
representative) is provided timely access to view online, download, and
transmit his or her health information; and (2) The MIPS eligible
clinician ensures the patient's health information is available for the
patient (or patient-authorized representative) to access using any
application of their choice that is configured to meet the technical
specifications of the Application Programming Interface (API) in the
MIPS eligible clinician's CEHRT.
Denominator: The number of unique patients seen by the
MIPS eligible clinician during the performance period.
Numerator: The number of patients in the denominator (or
patient authorized representative) who are provided timely access to
health information to view online, download, and transmit to a third
party and to access using an application of their choice that is
configured to meet the technical specifications of the API in the MIPS
eligible clinician's CEHRT.
Patient-Specific Education Measure: The MIPS eligible clinician
must use clinically relevant information from CEHRT to identify
patient-specific educational resources and provide electronic access to
those materials to at least one unique patient seen by the MIPS
eligible clinician.
Denominator: The number of unique patients seen by the
MIPS eligible clinician during the performance period.
Numerator: The number of patients in the denominator who
were provided electronic access to patient-specific educational
resources using clinically relevant information identified from CEHRT
during the performance period.
Objective: Coordination of Care Through Patient Engagement.
Objective: Use CEHRT to engage with patients or their authorized
representatives about the patient's care.
View, Download, Transmit (VDT) Measure: During the performance
period, at least one unique patient (or patient-authorized
representative) seen by the MIPS eligible clinician actively engages
with the EHR made accessible by the MIPS eligible clinician. A MIPS
eligible clinician may meet the measure by--(1) viewing, downloading,
or transmitting their health information to a third party; (2)
accessing their health information through the use of an API that can
be used by applications chosen by the patient and configured to the API
in the MIPS eligible clinician's CEHRT; or (3) a combination of (1) and
(2).
Denominator: Number of unique patients seen by the MIPS
eligible clinician during the performance period.
Numerator: The number of unique patients (or their
authorized representatives) in the denominator who have viewed online,
downloaded, or transmitted to a third party the patient's health
information during the performance period and the number of unique
patients (or their authorized representatives) in the denominator who
have accessed their health information through the use of an API during
the performance period.
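On one reading of this numerator, each unique patient counts once whether they satisfy either or both prongs (view/download/transmit, or API access), making the numerator a union of two patient sets. A sketch under that assumption (the `id`, `vdt`, and `api_access` field names are ours, for illustration only):

```python
def vdt_numerator(patients):
    """Count unique patients (or their authorized representatives) who
    viewed, downloaded, or transmitted their health information OR
    accessed it through an API; a patient who did both is counted once.
    Field names are assumed for illustration."""
    return len({p["id"] for p in patients if p["vdt"] or p["api_access"]})

patients = [
    {"id": 1, "vdt": True,  "api_access": True},   # counted once, not twice
    {"id": 2, "vdt": False, "api_access": True},
    {"id": 3, "vdt": False, "api_access": False},  # not in the numerator
]
print(vdt_numerator(patients))  # 2
```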
Secure Messaging Measure: For at least one unique patient seen by
the MIPS eligible clinician during the performance period, a secure
message was sent using the electronic messaging function of CEHRT to
the patient (or the patient-authorized representative), or in response
to a secure message sent by the patient (or the patient-authorized
representative).
Denominator: Number of unique patients seen by the MIPS
eligible clinician during the performance period.
Numerator: The number of patients in the denominator for
whom a secure electronic message is sent to the patient (or patient-
authorized representative) or in response to a secure message sent by
the patient (or patient-authorized representative), during the
performance period.
Patient-Generated Health Data Measure: Patient-generated health
data or data from a non-clinical setting is incorporated into the CEHRT
for at least one unique patient seen by the MIPS eligible clinician
during the performance period.
Denominator: Number of unique patients seen by the MIPS
eligible clinician during the performance period.
Numerator: The number of patients in the denominator for
whom data from non-clinical settings, which may include patient-
generated health data, is captured through the CEHRT into the patient
record during the performance period.
Objective: Health Information Exchange.
Objective: The MIPS eligible clinician provides a summary of care
record when transitioning or referring their patient to another setting
of care, receives or retrieves a summary of care record upon the
receipt of a transition or referral or upon the first patient encounter
with a new patient, and incorporates summary of care information from
other health care clinicians into their EHR using the functions of
CEHRT.
Send a Summary of Care (formerly Patient Care Record Exchange)
Measure: For at least one transition of care or referral, the MIPS
eligible clinician that transitions or refers their patient to another
setting of care or health care clinician--(1) creates a summary of care
record using CEHRT; and (2) electronically exchanges the summary of
care record.
Denominator: Number of transitions of care and referrals
during the performance period for which the MIPS eligible clinician was
the transferring or referring clinician.
Numerator: The number of transitions of care and referrals
in the denominator where a summary of care record was created using
CEHRT and exchanged electronically.
Request/Accept Summary of Care (formerly Patient Care Record)
Measure: For at least one transition of care or referral received or
patient encounter in which the MIPS eligible clinician has never before
encountered the patient, the MIPS eligible clinician receives or
retrieves and incorporates into the patient's record an electronic
summary of care document.
Denominator: Number of patient encounters during the
performance period for which a MIPS eligible clinician was the
receiving party of a transition or referral or has never before
encountered the patient and for which an electronic summary of care
record is available.
Numerator: Number of patient encounters in the denominator
where an electronic summary of care record
[[Page 77229]]
received is incorporated by the clinician into the CEHRT.
Clinical Information Reconciliation Measure: For at least one
transition of care or referral received or patient encounter in which
the MIPS eligible clinician has never before encountered the patient,
the MIPS eligible clinician performs clinical information
reconciliation. The MIPS eligible clinician must implement clinical
information reconciliation for the following three clinical information
sets: (1) Medication. Review of the patient's medication, including the
name, dosage, frequency, and route of each medication. (2) Medication
allergy. Review of the patient's known medication allergies. (3)
Current problem list. Review of the patient's current and active
diagnoses.
Denominator: Number of transitions of care or referrals
during the performance period for which the MIPS eligible clinician was
the recipient of the transition or referral or has never before
encountered the patient.
Numerator: The number of transitions of care or referrals
in the denominator where the following three clinical information
reconciliations were performed: Medication list, medication allergy
list, and current problem list.
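Because a transition counts toward this numerator only when all three clinical information sets are reconciled, the tally is a conjunction over the three sets. A hypothetical sketch (key names are our assumptions):

```python
REQUIRED_SETS = ("medication", "medication_allergy", "problem_list")

def reconciliation_numerator(transitions):
    """A transition or referral counts toward the numerator only when all
    three clinical information sets (medication list, medication allergy
    list, current problem list) were reconciled. Keys are assumed names
    for illustration."""
    return sum(1 for t in transitions
               if all(t["reconciled"].get(s, False) for s in REQUIRED_SETS))

transitions = [
    {"reconciled": {"medication": True, "medication_allergy": True,
                    "problem_list": True}},   # all three reconciled: counts
    {"reconciled": {"medication": True, "medication_allergy": False,
                    "problem_list": True}},   # missing one set: does not count
]
print(reconciliation_numerator(transitions))  # 1
```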
Objective: Public Health and Clinical Data Registry Reporting.
Objective: The MIPS eligible clinician is in active engagement with
a public health agency or clinical data registry to submit electronic
public health data in a meaningful way using CEHRT, except where
prohibited, and in accordance with applicable law and practice.
Immunization Registry Reporting Measure: The MIPS eligible
clinician is in active engagement with a public health agency to submit
immunization data and receive immunization forecasts and histories from
the public health immunization registry/immunization information system
(IIS).
Syndromic Surveillance Reporting Measure: The MIPS eligible
clinician is in active engagement with a public health agency to submit
syndromic surveillance data from a non-urgent care ambulatory setting
where the jurisdiction accepts syndromic data from such settings and
the standards are clearly defined.
Electronic Case Reporting Measure: The MIPS eligible clinician is
in active engagement with a public health agency to electronically
submit case reporting of reportable conditions.
Public Health Registry Reporting Measure: The MIPS eligible
clinician is in active engagement with a public health agency to submit
data to public health registries.
Clinical Data Registry Reporting Measure: The MIPS eligible
clinician is in active engagement to submit data to a clinical data
registry.
(b) 2017 Advancing Care Information Transition Objectives and Measures
Specifications (Referred to in the Proposed Rule as Modified Stage 2)
We proposed the 2017 Advancing Care Information Transition
objectives and measures for the advancing care information performance
category of MIPS as outlined in this section of the proposed rule. We
note that these objectives and measures have been adapted from the
Modified Stage 2 objectives and measures as finalized in the 2015 EHR
Incentive Programs final rule (80 FR 62793-62825); however, we have not
proposed to maintain the previously established thresholds for MIPS.
Any additional changes to the objectives and measures are outlined in
this section of the proposed rule. For a more detailed discussion of
the Modified Stage 2 objectives and measures, including explanatory
material and defined terms, we refer readers to the 2015 EHR Incentive
Programs final rule (80 FR 62793-62825).
Objective: Protect Patient Health Information.
Objective: Protect electronic protected health information (ePHI)
created or maintained by the CEHRT through the implementation of
appropriate technical, administrative, and physical safeguards.
Security Risk Analysis Measure: Conduct or review a security risk
analysis in accordance with the requirements in 45 CFR 164.308(a)(1),
including addressing the security (to include encryption) of ePHI data
created or maintained by CEHRT in accordance with requirements in 45
CFR 164.312(a)(2)(iv) and 45 CFR 164.306(d)(3), and implement security
updates as necessary and correct identified security deficiencies as
part of the MIPS eligible clinician's risk management process.
Objective: Electronic Prescribing.
Objective: MIPS eligible clinicians must generate and transmit
permissible prescriptions electronically.
E-Prescribing Measure: At least one permissible prescription
written by the MIPS eligible clinician is queried for a drug formulary
and transmitted electronically using CEHRT.
Denominator: Number of prescriptions written for drugs
requiring a prescription in order to be dispensed other than controlled
substances during the performance period; or number of prescriptions
written for drugs requiring a prescription in order to be dispensed
during the performance period.
Numerator: The number of prescriptions in the denominator
generated, queried for a drug formulary, and transmitted electronically
using CEHRT.
Objective: Clinical Decision Support (alternate proposal only).
Objective: Implement clinical decision support (CDS) interventions
focused on improving performance on high-priority health conditions.
Clinical Decision Support (CDS) Interventions Measure: Implement
three clinical decision support interventions related to three CQMs at
a relevant point in patient care for the entire performance period.
Absent three CQMs related to a MIPS eligible clinician's scope of
practice or patient population, the clinical decision support
interventions must be related to high-priority health conditions.
Drug Interaction and Drug-Allergy Checks Measure: The MIPS eligible
clinician has enabled and implemented the functionality for drug-drug
and drug-allergy interaction checks for the entire performance period.
Objective: Computerized Provider Order Entry (alternate proposal
only).
Objective: Use computerized provider order entry (CPOE) for
medication, laboratory, and diagnostic imaging orders directly entered
by any licensed healthcare professional, credentialed medical
assistant, or a medical staff member credentialed to and performing the
equivalent duties of a credentialed medical assistant, who can enter
orders into the medical record per state, local, and professional
guidelines.
Medication Orders Measure: At least one medication order created by
the MIPS eligible clinician during the performance period is recorded
using CPOE.
Denominator: Number of medication orders created by the
MIPS eligible clinician during the performance period.
Numerator: The number of orders in the denominator
recorded using CPOE.
Laboratory Orders Measure: At least one laboratory order created by
the MIPS eligible clinician during the performance period is recorded
using CPOE.
Denominator: Number of laboratory orders created by the
MIPS eligible clinician during the performance period.
Numerator: The number of orders in the denominator
recorded using CPOE.
[[Page 77230]]
Diagnostic Imaging Orders Measure: At least one diagnostic imaging
order created by the MIPS eligible clinician during the performance
period is recorded using CPOE.
Denominator: Number of diagnostic imaging orders created
by the MIPS eligible clinician during the performance period.
Numerator: The number of orders in the denominator
recorded using CPOE.
Objective: Patient Electronic Access.
Objective: The MIPS eligible clinician provides patients (or
patient-authorized representative) with timely electronic access to
their health information and patient-specific education.
Patient Access Measure: At least one patient seen by the MIPS
eligible clinician during the performance period is provided timely
access to view online, download, and transmit to a third party their
health information subject to the MIPS eligible clinician's discretion
to withhold certain information.
Denominator: The number of unique patients seen by the
MIPS eligible clinician during the performance period.
Numerator: The number of patients in the denominator (or
patient authorized representative) who are provided timely access to
health information to view online, download, and transmit to a third
party.
View, Download, Transmit (VDT) Measure: At least one patient seen
by the MIPS eligible clinician during the performance period (or
patient-authorized representative) views, downloads or transmits their
health information to a third party during the performance period.
Denominator: Number of unique patients seen by the MIPS
eligible clinician during the performance period.
Numerator: The number of unique patients (or their
authorized representatives) in the denominator who have viewed online,
downloaded, or transmitted to a third party the patient's health
information during the performance period.
Objective: Patient-Specific Education.
Objective: The MIPS eligible clinician provides patients (or
patient authorized representative) with timely electronic access to
their health information and patient-specific education.
Patient-Specific Education Measure: The MIPS eligible clinician
must use clinically relevant information from CEHRT to identify
patient-specific educational resources and provide access to those
materials to at least one unique patient seen by the MIPS eligible
clinician.
Denominator: The number of unique patients seen by the
MIPS eligible clinician during the performance period.
Numerator: The number of patients in the denominator who
were provided access to patient-specific educational resources using
clinically relevant information identified from CEHRT during the
performance period.
Objective: Secure Messaging.
Objective: Use CEHRT to engage with patients or their authorized
representatives about the patient's care.
Secure Messaging Measure: For at least one patient seen by the MIPS
eligible clinician during the performance period, a secure message was
sent using the electronic messaging function of CEHRT to the patient
(or the patient-authorized representative), or in response to a secure
message sent by the patient (or the patient authorized representative)
during the performance period.
Denominator: Number of unique patients seen by the MIPS
eligible clinician during the performance period.
Numerator: The number of patients in the denominator for
whom a secure electronic message is sent to the patient (or patient-
authorized representative) or in response to a secure message sent by
the patient (or patient-authorized representative), during the
performance period.
Objective: Health Information Exchange.
Objective: The MIPS eligible clinician provides a summary of care
record when transitioning or referring their patient to another setting
of care, receives or retrieves a summary of care record upon the
receipt of a transition or referral or upon the first patient encounter
with a new patient, and incorporates summary of care information from
other health care clinicians into their EHR using the functions of
CEHRT.
Health Information Exchange Measure: The MIPS eligible clinician
that transitions or refers their patient to another setting of care or
health care clinician (1) uses CEHRT to create a summary of care
record; and (2) electronically transmits such summary to a receiving
health care clinician for at least one transition of care or referral.
Denominator: Number of transitions of care and referrals
during the performance period for which the MIPS eligible clinician
was the transferring or referring health care clinician.
Numerator: The number of transitions of care and referrals
in the denominator where a summary of care record was created using
CEHRT and exchanged electronically.
Objective: Medication Reconciliation.
Medication Reconciliation Measure: The MIPS eligible clinician
performs medication reconciliation for at least one transition of care
in which the patient is transitioned into the care of the MIPS eligible
clinician.
Denominator: Number of transitions of care or referrals
during the performance period for which the MIPS eligible clinician was
the recipient of the transition or referral or has never before
encountered the patient.
Numerator: The number of transitions of care or referrals
in the denominator where the following three clinical information
reconciliations were performed: Medication list, medication allergy
list, and current problem list.
Objective: Public Health Reporting.
Objective: The MIPS eligible clinician is in active engagement with
a public health agency or clinical data registry to submit electronic
public health data in a meaningful way using CEHRT, except where
prohibited, and in accordance with applicable law and practice.
Immunization Registry Reporting Measure: The MIPS eligible
clinician is in active engagement with a public health agency to submit
immunization data.
Syndromic Surveillance Reporting Measure: The MIPS eligible
clinician is in active engagement with a public health agency to submit
syndromic surveillance data.
Specialized Registry Reporting Measure: The MIPS eligible clinician
is in active engagement to submit data to a specialized registry.
We note that the 2017 Advancing Care Information Transition
objectives and measures specifications that we proposed are for those
MIPS eligible clinicians that are using 2014 Edition CEHRT. We are
referring to this as the ``2017 Advancing Care Information Transition
objectives and measures'' in this final rule with comment period,
although it was referred to in the proposed rule as the ``Modified
Stage 2 objectives and measures'' set. In addition, in this final rule
with comment period, we refer to the measures specified for the
advancing care information performance category described in section
II.E.5.g.(7) of the proposed rule (81 FR 28221 through 28223) that
correlate to a Stage 3 as the ``Advancing Care Information objectives
and measures'' although it was referred to in the proposed rule as
``MIPS objectives and measures'' set. We note that these terms are
more specific
[[Page 77231]]
to MIPS, and to the advancing care information performance category
than the terms used in the proposed rule. We have also decided to re-
name several of the proposed measures to use titles that we believe are
more illustrative of the substance of the measures. We note that we
are not changing the names of the objectives associated with these
measures. The measures being renamed are as follows:
------------------------------------------------------------------------
          Proposed title                        Revised title
------------------------------------------------------------------------
Patient Access............................ Provide Patient Access.
Patient Care Record Exchange.............. Send a Summary of Care.
Request/Accept Patient Care Record........ Request/Accept Summary of Care.
------------------------------------------------------------------------
We will be referring to these measures by their revised titles
throughout the remainder of this final rule with comment period.
The following is a summary of the comments we received regarding
the proposal to adopt the objectives and measures detailed at 81 FR
28226-28230 for the advancing care information performance category.
Comment: One commenter suggested the e-Prescribing measure be
included in both the base score as well as the performance score of the
advancing care information performance category to give more
flexibility and offer an opportunity for MIPS eligible clinicians to
earn more points, especially for those MIPS eligible clinicians who
will be using a 2014 Edition CEHRT in 2017.
    Response: We agree with the several commenters who stated that MIPS
eligible clinicians should not be disadvantaged due to having to report
on the 2017 Advancing Care Information Transition objectives and
measures in 2017. While we have not added the e-Prescribing measure to
the performance score, we have added many other measures to give MIPS
eligible clinicians the opportunity to increase their performance score
under the advancing care information performance category. We refer
readers to section II.E.5.g.(6)(a) of this final rule with comment
period for further discussion of the scoring policy to see how we have
equalized the opportunities for MIPS eligible clinicians reporting
using technology certified to the 2014 Edition and those using
technology certified to the 2015 Edition for the advancing care
information performance category for 2017.
Comment: Many commenters supported the inclusion of the e-
Prescribing measure in the base score of the advancing care information
performance category. Some recommended modifications to the measure
such as changing the threshold to yes/no. A commenter supported
adoption of the e-Prescribing measure on the condition that it have no
minimum threshold and no performance measurement.
Response: We disagree that the threshold should be yes/no as we
continue to believe that reporting a numerator and denominator is more
appropriate because it will provide us with the data necessary to
monitor performance on this measure. Performance on the measure, under
the EHR Incentive Programs, has been consistently much higher than the
thresholds set. We believe that through e-Prescribing, errors from
paper prescriptions are reduced, and therefore, inclusion in the base
score is justified. We also disagree with commenters who recommended
adding e-Prescribing to the performance score. Since historical
performance on this measure under the EHR Incentive Program has been
high, we do not believe that this measure will help MIPS eligible
clinicians distinguish themselves from others in regard to performance,
and thus we have not included it in the performance score.
Comment: A commenter urged CMS to take into account that
measurement of e-Prescribing is often not a measurement of the
physician's diligence or capability, but rather a measurement of
factors completely outside the physician's control, such as the ability
of nearby pharmacies to accept electronic prescriptions. Another
commenter recommended an exception to e-Prescribing for MIPS eligible
clinicians in rural areas where most pharmacies do not have capability
to accept electronic prescriptions.
Response: While we understand these concerns, section
1848(o)(2)(A)(i) of the Act requires electronic prescribing as part of
using CEHRT in a meaningful manner. We note that we proposed an
exclusion for MIPS eligible clinicians who write fewer than 100
permissible prescriptions. Further, we believe the inclusion of the
Electronic Prescribing objective in the base score is appropriate
because, as noted in the Medicare and Medicaid Programs; Electronic
Health Record Incentive Program; Final Rule (75 FR 44338), it is the
most widely adopted form of electronic exchange occurring and has been
proven to reduce medication errors.
Comment: For the e-Prescribing measure, a commenter requested
clarification that MIPS eligible clinicians are permitted to optionally
exclude from the denominator any ``standing'' or ``protocol'' orders
for medications that are predetermined for a given procedure or a given
set of patient characteristics.
Response: We disagree that the denominator should exclude
``standing'' prescriptions and continue to believe that the denominator
should be the number of prescriptions written for drugs requiring a
prescription in order to be dispensed other than controlled substances
during the performance period; or number of prescriptions written for
drugs requiring a prescription in order to be dispensed during the
performance period.
Comment: One commenter stated that the e-Prescribing measure will
be topped out by the time that MIPS is implemented and should be
removed.
Response: While performance on the e-Prescribing measure may be
high for EPs participating in the EHR Incentive Programs, the MIPS
program includes many other clinicians who may have limited experience
with this measure. Furthermore, as we have previously stated, section
1848(o)(2)(A)(i) of the Act requires electronic prescribing as part of
using CEHRT in a meaningful manner, and thus, we have chosen to make it
a required measure under the advancing care information performance
category.
Comment: A commenter asked how e-Prescribing for the prescription
of controlled substances should be measured for MIPS eligible
clinicians who have not yet adopted the upgraded technology associated
with the 2015 Edition.
Response: We proposed (81 FR 28227) that MIPS eligible clinicians
would continue to have the option to include or not include controlled
substances that can be electronically prescribed in the denominator of
the e-Prescribing measure. This means that MIPS eligible clinicians may
choose to include controlled substances in the definition of
``permissible prescriptions'' at their discretion where feasible and
allowable by law in the jurisdiction where they provide care. The MIPS
eligible clinician may also choose not to include controlled substances
in the definition of ``permissible prescriptions'' even if such
electronic prescriptions are feasible and allowable by law in the
jurisdiction where they provide care. This policy is the same for MIPS
eligible clinicians using EHR technology certified to the 2014 and the
2015 Editions.
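The clinician's option described above amounts to choosing between two denominator definitions that differ only in whether electronically prescribable controlled substances are counted. A hypothetical sketch (the `controlled` flag is our assumed field):

```python
def erx_denominator(prescriptions, include_controlled):
    """Compute the e-Prescribing denominator under either permissible
    definition: with include_controlled=False, controlled substances are
    excluded; with True, all prescriptions requiring a prescription in
    order to be dispensed are counted. 'controlled' is an assumed field
    name for illustration."""
    return sum(1 for rx in prescriptions
               if include_controlled or not rx["controlled"])

rxs = [{"controlled": False}, {"controlled": True}, {"controlled": False}]
print(erx_denominator(rxs, include_controlled=False))  # 2
print(erx_denominator(rxs, include_controlled=True))   # 3
```

Either choice is applied consistently across the performance period; the numerator definition is unchanged.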
Comment: Many commenters supported the inclusion of the Patient
Electronic Access objective. Many commenters appreciated the emphasis
on patient electronic access throughout the advancing care information
performance category and agreed with
[[Page 77232]]
providing flexibility for MIPS eligible clinicians to provide
information to patients.
Response: We appreciate the support and will require the Provide
Patient Access measure of the Patient Electronic Access objective in
the base score of the advancing care information performance category.
We continue to believe that by providing access to information and
increasing patient engagement, health care outcomes can be improved.
    Comment: Many commenters claimed that MIPS eligible clinicians will
continue to struggle to meet the Patient Electronic Access objective.
Some commenters believed the Patient Electronic Access objective holds
MIPS eligible clinicians responsible for the actions of patients and
other physicians outside of their control. A few noted that internet
access issues will suppress small and rural MIPS eligible clinicians'
performance scores in the advancing care information performance
category, particularly in achieving success with Patient Electronic
Access. Another commenter expressed concern regarding the Patient
Electronic Access objective due to a lack of computers and electronic
access among minority and non-English speaking patients. One commenter
recommended that MIPS eligible clinicians be given 4 business days to
provide this information, rather than 48 hours because MIPS eligible
clinicians need time to review, correct and verify the accuracy of the
information.
    Response: While we understand these concerns, we believe providing
patients access to their health information is a critical step in
improving patient care, increasing transparency, and engaging patients.
Under the Patient Electronic Access objective, the Provide Patient
Access measure only requires that patients are provided timely access
to view online, download, and transmit their health information, and
that the information is available to access using any application of
their choice that is configured to meet the technical specifications of
the Application Programming Interface (API) in the MIPS eligible
clinician's CEHRT. This measure is required for the base score. The
base score requirement is for MIPS eligible clinicians to report a
numerator (of at least one) and a denominator, which we believe is
reasonable and achievable by most MIPS eligible clinicians regardless
of their practice circumstances or the characteristics of their patient
population. This measure does not require that the patient take any
action. (Note that the View, Download or Transmit measure under the
Coordination of Care Through Patient Engagement objective depends on
the actions of the patient, but that measure is part of the performance
score and is not required.) The other measure under the Patient
Electronic Access objective is the Patient-Specific Education measure,
which is part of the performance score and is not required.
We additionally note that we have increased flexibility of our
scoring methodology allowing MIPS eligible clinicians to focus on
measures that best represent their practice in the performance score,
and thus this measure is optional for reporting as part of the
performance score.
Comment: A few commenters suggested that both measures in the
Patient Electronic Access objective be retired. They believed that CMS
data show most clinicians score very well on the Patient-Specific
Education and Provide Patient Access measures and thus should not have
to report on them. One commenter suggested that the Patient-Specific
Education measure be considered ``topped out'' due to historically high
performance and expressed concern that the measure as currently
specified is overly constrained, limiting providers who may prefer
workflows for providing patient education beyond what is permitted by
CMS and certification.
Response: We disagree. As we have indicated previously, we believe
these measures are a critical step to improving patient health,
increasing transparency and engaging patients in their care. We
additionally note there are certain types of clinicians that were not
eligible to participate under the EHR Incentive Programs but are
considered MIPS eligible clinicians, and we believe that it is
appropriate to include the Patient Electronic Access objective and its
associated measures. We note that under Stage 2 of the EHR
Incentive Programs, EPs achieved an average of 91 percent on the
Provide Patient Access measure. While under the EHR Incentive Programs
EPs performed well, we will be gathering data on MIPS eligible
clinicians to determine whether the Patient-Specific Education and
Patient Electronic Access measures should be included in future MIPS
performance periods. We welcome specific examples and suggestions for
changes to the existing measures and potential new measures to replace
the existing ones.
Comment: A commenter sought clarification on the Patient Electronic
Access objective around the API availability and the use of 2014
Edition CEHRT. Another commenter asked what is meant by the phrase
``subject to the MIPS eligible clinician's discretion to withhold
certain information'' and asked why it was included.
Response: The specifications of the 2017 Advancing Care Information
Transition Provide Patient Access measure do not require use of an API,
and thus MIPS eligible clinicians who use EHR technology certified to
the 2014 Edition and report this measure would not need to use an API
for this measure. We refer readers to section II.E.5.g.(7) of this
final rule with comment period for a description of the measure
specifications. The Advancing Care Information Provide Patient Access
measure is identical to the Patient Electronic Access measure that was
finalized in the 2015 EHR Incentive Programs final rule for Stage 3. We
maintain that MIPS eligible clinicians who provide electronic access to
patient health information should have the ability to withhold any
information from disclosure if the disclosure of the information is
prohibited by federal, state or local laws or such information, if
provided, may result in significant patient harm. We refer readers to
the 2015 EHR Incentive Programs final rule (80 FR 62841 through 62852) for a
discussion of the Stage 3 Patient Electronic Access measure.
Comment: A commenter suggested that the View, Download and Transmit
and Secure Messaging measures be made optional and noted the previous
reductions in thresholds as an indication that there are significant
challenges to meeting these measures.
Response: While we understand that there are challenges with these
measures, we continue to believe that the measures in the Coordination
of Care Through Patient Engagement objective are an essential component
of improving health care. We note that under our revised scoring
methodology, these measures will not be required in the base score of
the advancing care information category.
Comment: One commenter believed that although it is a reasonable
policy for CMS to require MIPS eligible clinicians to make information
electronically available to their patients within a reasonable time
frame, they were very concerned about the numerator requirements of the
View, Download, or Transmit measure, which take into account only the
actions of patients. Some commenters stated that MIPS eligible clinicians who are
diligent in making information securely available to their patients
should not be penalized simply because the patient is not interested in
accessing the information.
Response: The View, Download, or Transmit measure is not required
in the
[[Page 77233]]
base score of the advancing care information performance category under
our final scoring policy. It is available for MIPS eligible clinicians
who choose to report on the measure to increase their performance
score.
Comment: A few commenters recommended removing the Send a Summary
of Care measure (formerly named the Patient Care Record Exchange
measure) under the Health Information Exchange objective from the base
score because some specialists may not have any transitions of care.
One commenter suggested that an exclusion be provided for MIPS eligible
clinicians that do not transition care or refer patients during the
performance period.
Response: We disagree with the recommendation to remove this
measure from the base score. One of the primary focuses of the
advancing care information performance category is to encourage the
exchange of health information using CEHRT. The Send a Summary of Care
measure encourages the exchange of summary of care information with
other health care providers and clinicians to support better patient
outcomes. We believe that MIPS eligible clinicians, particularly
specialists, have the opportunity to send a summary of care record to,
or receive one from, another care setting or clinician at least once
during a MIPS performance
period. In addition, since meeting the requirements of this measure to
earn the base score involves reporting a numerator and denominator of
at least one rather than meeting a percentage threshold, we believe
this offers enough flexibility for MIPS eligible clinicians who are
concerned that they rarely exchange patient health information with
other providers.
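The base score reporting requirement described above can be sketched as a simple check (an illustrative sketch only; the function and parameter names are hypothetical and not CMS terminology): a clinician meets the reporting requirement by submitting a numerator and denominator of at least one, rather than meeting a percentage threshold.

```python
def meets_base_score_reporting(numerator: int, denominator: int) -> bool:
    """Illustrative sketch of the base-score reporting check described
    above: a numerator and denominator of at least one is reported,
    rather than a percentage threshold being met.
    Names are hypothetical, not CMS specifications."""
    return numerator >= 1 and denominator >= 1

# A clinician with even a single summary-of-care exchange satisfies the check.
print(meets_base_score_reporting(1, 1))   # True
print(meets_base_score_reporting(0, 3))   # False: no qualifying action reported
```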
Comment: A commenter requested that the Patient-Specific Education
measure under the Patient Electronic Access objective not be limited to
educational materials identified by CEHRT as they believe many medical
specialty societies have developed patient-facing Web sites and
educational materials.
Response: We appreciate this suggestion and will consider it in future
years of MIPS. However, as finalized for the 2017 performance period,
the Patient-Specific Education measure is limited to educational
materials identified by CEHRT. We note that we have refined our
proposal and in 2017, this measure is not required in the base score of
the advancing care information category. MIPS eligible clinicians may
choose whether to report this measure as part of the performance score.
Comment: One commenter asked for clarification about when the
patient-specific education was to be provided. The 2017 Advancing Care
Information Transition measure in the proposed rule (based on Modified
Stage 2 measure of the EHR Incentive Program) requires that patient-
specific education be provided during the performance period while the
2015 EHR Incentive Programs final rule allows patient education to be
provided any time between the start of the EHR reporting period and the
date of attestation to count toward the numerator.
Response: While the commenter is correct about the policy
established for the EHR Incentive Programs, under the MIPS, the
patient-specific education must be provided within the performance
period. We additionally note for the commenter that we included a
proposal for the EHR Incentive Programs related to measure calculations
for actions outside the EHR reporting period in the recent hospital
Outpatient Prospective Payment System Proposed Rule (81 FR 45745
through 45746) for reporting in CY 2017 for the EHR Incentive Program.
Comment: A commenter requested that we stay consistent with the
Stage 3 measure exclusion for the Patient-Specific Education measure
and allow MIPS eligible clinicians with no office visits during the
performance period to report a ``null value'' and achieve
full base and performance score credit.
Response: In our final scoring methodology for the advancing care
information category, the Patient-Specific Education measure is not a
required measure for reporting in the base score, and thus we do not
believe it is necessary to provide an exclusion for this measure.
Instead MIPS eligible clinicians may choose to report the measure to
earn credit in the advancing care information performance score. We
believe it is appropriate to require the reporting of a numerator and
denominator to add to the performance score. We refer readers to
section II.E.5.g.(6)(a) for more discussion of our final scoring
policy. We additionally note that there are exclusions for MIPS
eligible clinicians who are considered non-patient facing, and direct
readers to section II.E.3. of this final rule with comment period for
further discussion of this policy.
Comment: A commenter questioned whether the MIPS eligible clinician
or the patient is responsible for the View, Download, and Transmit
measure under the Coordination of Care Through Patient Engagement
objective, as the description states that the MIPS eligible clinician
may meet the measure but does not reflect the necessity of a
patient viewing, downloading, or transmitting.
Response: We appreciate that the commenter brought this error to
our attention. Our intention was that a MIPS eligible clinician may
meet the measure if at least one unique patient viewed, downloaded, or
transmitted to a third party their health information. We are revising
the Advancing Care Information measure under the Coordination of Care
Through Patient Engagement objective to reflect our intended policy.
Comment: Some commenters supported the inclusion of the Secure
Messaging measure. A few recommended that it be converted into a yes/no
measure. A commenter supported adoption of the proposed Secure
Messaging measure, provided that the finalized measure have no minimum
threshold and no performance measurement. A few commenters requested
the removal of the requirement for secure messaging between patient and
MIPS eligible clinician for nursing home residents and patients who
receive their primary care at home, since such patients will not sign up. A
commenter recommended changing the numerator of the Secure Messaging
measure to ``responses to secure messages sent by patients,'' and the
denominator to ``all secure messages sent by patients,'' to address the
misalignment between the numerator and denominator in the proposed
measure.
Response: We appreciate the comments and the support for the Secure
Messaging measure. In our revised scoring policy, we are finalizing our
scoring methodology such that the Secure Messaging measure is not one
of the required measures of the advancing care information performance
category. MIPS eligible clinicians may still choose to report the
measure to earn credit in the performance score, and thus have the
option to determine whether this measure represents their practice. We
refer readers to section II.E.5.g.(6)(a) of this final rule with
comment period for further discussion of our final scoring policy.
We disagree with the suggestion to change Secure Messaging to a
yes/no measure, or to change the numerator and denominator as this
measure is meant to promote the sending of secure messages by the MIPS
eligible clinician and not by patients. We believe that it is more
appropriate for the numerator to consist of the number of patients
found in the denominator to whom a secure electronic message is sent or
in response
[[Page 77234]]
to a secure message sent by the patient (or patient-authorized
representative), during the performance period.
Comment: Some commenters opposed the inclusion of the Health
Information Exchange objective and the associated measures: Send a
Summary of Care, Request/Accept Summary of Care, and Clinical
Information Reconciliation. They noted that the objective holds MIPS
eligible clinicians responsible for information over which they have no
control, and for the actions of patients and other physicians outside
of their control, and recommended that the objective be removed. Other
commenters opposed the measures
included in the Health Information Exchange objective because those
measures overestimate the interoperability of EHR technology.
Commenters also expressed concern that this measure would emphasize the
quantity of information, rather than the sharing of relevant information. A
few commenters indicated that past experience with the Health
Information Exchange objective in the EHR Incentive Programs has been
challenging for EPs. Challenges include costs, lack of contacts at
hospital systems to effectively communicate where an electronic
transition of care document should be sent, and inadequate training and
understanding of how to use EHR functionality even if fully enabled.
Response: While we appreciate these concerns, we believe the
benefits of health information exchange outweigh the challenges. As we
stated in the 2015 EHR Incentive Programs final rule (80 FR 62804), we
believe that the electronic exchange of health information between
providers and clinicians would encourage the sharing of the patient
care summary from one provider or clinician to another, along with
important information that the patient may not have been able to
provide. This
can significantly improve the quality and safety of referral care and
reduce unnecessary and redundant testing. EHRs and the electronic
exchange of health information, either directly or through health
information exchanges, can reduce the burden of such communication.
Therefore, we believe it is appropriate to include the Health
Information Exchange objective and include the Send a Summary of Care
and the Request/Accept Summary of Care measures as required in the base
score of the advancing care information performance category.
Comment: A commenter was concerned about MIPS eligible clinicians
who do not have access to a health information exchange and in these
cases, recommended a hardship exception option for this objective.
Response: We note that there is no requirement to have access to a
health information exchange for the Health Information Exchange
objective. Rather for the Request/Accept Summary of Care measure
(formerly Patient Care Record measure), the summary of care record must
be electronically exchanged. We note that the intent for flexibility
around exchange via any electronic means is to promote and facilitate a
wide range of options. We refer readers to the discussion of the Health
Information Exchange objective at 80 FR 62852 through 62862 as it
provides a thorough discussion of transport mechanisms for the summary
of care record.
Comment: Some commenters believe that internet access issues will
stifle performance in the advancing care information performance
category for MIPS eligible clinicians in small and rural settings,
especially those with high staff turnover, in trying to satisfy the
Health Information Exchange objective.
Response: We understand this concern and recognize that nationwide
access to broadband is still a challenge for some MIPS eligible
clinicians. If a MIPS eligible clinician does not have sufficient
internet access, they may qualify for reweighting of the advancing care
information performance category score. We refer readers to the
discussion of MIPS eligible clinicians facing a significant hardship in
section II.E.5.g.(8)(a)(ii) of this final rule with comment period.
Comment: A commenter stated that the Health Information Exchange
objective does not adequately reflect EHR interoperability. They
believe the metric is too focused on the quantity of information moved
and not the relevance of these exchanges. They urged CMS to re-focus
the advancing care information performance category on interoperability
by developing specialty-specific interoperability use cases rather than
measuring the quantity of data exchanged.
Response: We are very interested in adopting measures that reflect
interoperability. We urge interested parties to participate in our
solicitation call for new measures that will be available in the next
few months.
Comment: A commenter urged CMS to clarify whether the denominator
of the Request/Accept Summary of Care measure under the Health
Information Exchange objective includes the number of transitions of
care sent to the MIPS eligible clinician with CEHRT, and whether MIPS
eligible clinicians are able to exclude referrals from this measure if
the receiving clinician does not have CEHRT fully implemented.
Response: The calculation of the denominator for the 2017 Advancing
Care Information Transition measure, Health Information Exchange, is
different from that of the Advancing Care Information measure, Request/
Accept Summary of Care. As we noted in the 2015 EHR Incentive Programs
final rule (80 FR 62804-62806) we did not adopt a requirement for the
Modified Stage 2 Health Information Exchange measure (which correlates
to the 2017 Advancing Care Information Transition measure) that the
recipient to whom the EP sends a summary of care document possess CEHRT
or even an EHR in order to receive an electronic summary of
care document. However, measure 2 of the Stage 3 Health Information
Exchange objective (which correlates to the Advancing Care Information
measure, Request/Accept Summary of Care) was finalized such that the
EP, as a recipient of a transition or referral, incorporates an
electronic summary of care document into CEHRT. Therefore, as we
proposed for MIPS, we are finalizing our policy such that transitions
and referrals from recipients who do not possess CEHRT could be
excluded from the denominator of the 2017 Advancing Care Information
Transition measure, Health Information Exchange, but should be included
in the denominator of the MIPS measure, Request/Accept Summary of
Care.
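The denominator distinction described above can be made concrete with a minimal illustrative sketch (hypothetical function and field names; not part of the rule or any CMS specification): under the 2017 transition measure, exchanges with parties lacking CEHRT may be excluded from the denominator, while the MIPS Request/Accept Summary of Care measure counts all transitions and referrals.

```python
def transition_measure_denominator(transitions):
    """2017 Advancing Care Information Transition measure (Health
    Information Exchange): transitions and referrals involving a party
    without CEHRT may be excluded from the denominator.
    Illustrative sketch only; field names are hypothetical."""
    return sum(1 for t in transitions if t["other_party_has_cehrt"])

def request_accept_denominator(transitions):
    """Advancing Care Information measure (Request/Accept Summary of
    Care): all transitions and referrals are included in the
    denominator, regardless of whether the other party possesses CEHRT."""
    return len(transitions)

transitions = [
    {"other_party_has_cehrt": True},
    {"other_party_has_cehrt": False},
]
print(transition_measure_denominator(transitions))  # 1: non-CEHRT exchange excluded
print(request_accept_denominator(transitions))      # 2: all exchanges counted
```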
We disagree that the Advancing Care Information Request/Accept
Summary of Care measure should be limited to include only recipients
who possess CEHRT, as that would limit support for MIPS
eligible clinicians exchanging health information with providers and
clinicians across a wide range of settings. We further note that,
consistent with the policy set forth in the 2015 EHR Incentive Programs
final rule (80 FR 62852-62862), MIPS eligible clinicians and groups may
send the electronic summary of care document via any electronic means
so long as the MIPS eligible clinician sending the summary of care
record is using the standards established for the creation of the
electronic summary of care document.
Comment: Many commenters strongly supported the inclusion of the
Health Information Exchange objective and associated measures. They
noted benefits such as the incorporation and use of both non-clinical
and patient-generated health data as well as supplementing medication
reconciliation for transitions of care
[[Page 77235]]
with medication allergies and problems as part of the Health
Information Exchange objective. They supported the prioritization of
measures that promote the policy objectives of interoperability, care
coordination, and patient engagement. They supported measures that
incorporate the use of online access to health information and secure
email, and the collection and integration of data from non-clinical
sources.
Response: We agree and will continue to require the Health
Information Exchange objective in the advancing care information
performance category. In addition, section 1848(o)(2)(A)(ii) of the Act
requires the electronic exchange of health information.
Comment: A commenter noted that the definition of patient-generated
health data inappropriately focuses on the device generating the data
rather than the patient and recommended expanding the definition to
include other more relevant data sources such as filling out forms and
surveys, and by self-report. One commenter believed there should be a
distinction between patient-generated and device-generated data and
that MIPS eligible clinicians should have the ability to review data
sources as part of the record similar to a track change function.
Response: For the Patient-Generated Health Data measure, the
calculation of the numerator incorporates both health data from non-
clinical settings, as well as health data generated by the patient. We
will consider the suggestion for expanding the types of health data to
include for this measure, such as some patient-reported information, in
future rulemaking.
Comment: For the Clinical Information Reconciliation measure,
specifically the medication reconciliation portion, a commenter
believed the updated measure for Stage 3 adds further definition to the
data that must be reviewed.
Response: We note that the Clinical Information Reconciliation
measure under the Health Information Exchange objective that we are
adopting for the advancing care information performance category is the
same as the Stage 3 measure under the EHR Incentive Programs with the
threshold and exclusion removed.
Comment: For the Medication Reconciliation measure, the proposed
2017 Advancing Care Information Transition measure adds the medication
allergy list and current problems list to the items that must be
reconciled. One commenter indicated that this significantly expands the
current Modified Stage 2 measure such that a change in workflow is
required. In addition, functionality to reconcile medication allergies
and problems is not included in the 2014 Edition of CEHRT.
Response: The 2017 Advancing Care Information Transition Medication
Reconciliation measure is still limited to medication reconciliation as
it was for the Modified Stage 2 measure. For the Advancing Care
Information measure, we proposed to include medication list, medication
allergy list and current problem list under the Clinical Information
Reconciliation measure which aligns with the third measure under the
Health Information Exchange objective for Stage 3 of the EHR Incentive
Programs and requires technology certified to the 2015 Edition.
Comment: A few commenters requested, in addition to eliminating the
requirement to report the CPOE and CDS objectives and associated
measures, that MIPS eligible clinicians be required to report only on
the remaining objectives and measures that are relevant to their
practice.
Response: In developing our final scoring methodology for the
advancing care information performance category for a performance
period in 2017, we have significantly reduced the number of required
measures from 11 to five. We have moved more measures to the
performance score so that MIPS eligible clinicians are able to tailor
their participation by relevance to their practices. We refer readers
to section II.E.5.g.(6)(a) for more discussion of our final scoring
policy.
Comment: The majority of commenters supported the proposal to
include the Public Health and Clinical Data Registry Reporting
objective in the advancing care information performance category. Many
commenters particularly praised the reduction in requirements of the
objective by only requiring the reporting of the Immunization Registry
Reporting measure while including the remaining measures as optional to
earn a bonus point. However, some commenters expressed concern that by
only requiring one measure to report, the importance of public health
registry reporting is downplayed. Many commenters suggested MIPS
eligible clinicians be encouraged and incentivized to report to
registries beyond Immunization Registry Reporting.
Several commenters indicated that the Public Health Registry
reporting objective would be better suited as an activity in the
improvement activities performance category and public health registry
reporting should be counted for points in that performance category
rather than the advancing care information performance category.
Response: We appreciate the support of our proposal to reduce the
reporting burden for the Public Health and Clinical Data Registry
Reporting objective. We agree that, given the importance and benefit to
MIPS eligible clinicians of submitting data to multiple registries,
more points should be awarded for reporting to additional
registries under the objective. As we have amended our proposal and the
Immunization Registry Reporting measure is no longer a required measure
in the base score, MIPS eligible clinicians may still choose to report
the measure to increase their performance score. In addition, we are
increasing the bonus to 5 percent for reporting one or more public
health or clinical data registries.
We disagree that the Public Health and Clinical Data Registry
reporting objective should be in the improvement activities performance
category. The proposed measures in the Public Health and Clinical Data
Registry Reporting objective focus on active, ongoing engagement with
registries, as well as electronic submission of data, which we believe
are within the scope of effectively using CEHRT to achieve the goals of
the advancing care information performance category.
Comment: A commenter supported the proposal to include the Public
Health and Clinical Data Registry reporting but encouraged CMS to
require reporting to cancer registries, because accurate and detailed
cancer information enables better public policy development.
Response: We have not created a separate cancer registry reporting
measure for MIPS because we believe that such reporting is captured
under existing public health registry reporting measures. If a MIPS
eligible clinician is reporting under the 2017 Advancing Care
Information Transition objectives and measures, they may report cancer
registry data under the specialized registry measure. However, if the
eligible clinician or group chooses to do so, they must use the 2014
Edition certification criteria specific to cancer case reporting in
order to meet the measure. This measure is an exception to the flexible
CEHRT requirements for the 2017 Advancing Care Information Transition
objectives and measures Specialized Registry Reporting measure and for
this reason we previously finalized a policy that if a participant has
the CEHRT available and chooses to report to meet the measure they may
do so but they are not required to consider a cancer registry in their
specialized
[[Page 77236]]
registry selection (80 FR 62823). If the MIPS eligible clinician is
reporting under the MIPS advancing care information performance
category measures, active engagement with a cancer registry would meet
the Public Health Registry Reporting measure and would require the use
of technology certified to the cancer case reporting criteria of the
2014 or 2015 Edition.
Comment: One commenter expressed concern that many of the measures
under the Public Health and Clinical Registry Reporting objective do
not apply to all practices, and, for those to whom they do apply, the
measures should not burden MIPS eligible clinicians by requiring them
to join a registry in order to report.
Response: We appreciate the concern that different registries have
different requirements for participation and they may not apply to a
MIPS eligible clinician's practice. We note that we have amended our
proposal and the Immunization Registry Reporting measure is no longer a
required measure, but MIPS eligible clinicians may report the measure
to earn credit in the performance score. In addition, we are awarding a
bonus score only for reporting to additional public health or
clinical data registries. We believe this offers enough flexibility for
MIPS eligible clinicians who may experience challenges engaging with a
public health or clinical registry.
Comment: A commenter recommended that for performance period 2017,
MIPS eligible clinicians be required to be in active engagement with
two public health registries and to report on two public health
registry reporting measures, for example, Immunization Registry
Reporting and one optional public health registry reporting measure.
Several commenters recommended that for performance periods in 2018 and
beyond, MIPS eligible clinicians be required to be in active engagement
with three public health registries and to report on three public
health registry reporting measures, for example, Immunization Registry
Reporting, Electronic Public Health Registry Reporting, and one
specialized public health registry.
Response: While we appreciate these comments, we are not requiring
Public Health and Clinical Data Registry Reporting in the base score of
the advancing care information performance category. MIPS eligible
clinicians can increase their performance score if they choose to
report on the Immunization Registry Reporting measure in 2017. We are
also finalizing as part of our scoring policy that MIPS eligible
clinicians can earn a bonus score for reporting to additional public
health registries.
Comment: A commenter stated that our proposal to require only the
Immunization Registry Reporting measure will likely result in a
decrease in public health reporting. They urged CMS to retain the
public health reporting requirements from the EHR Incentive Programs.
Another commenter noted that, after putting significant effort into
meeting the EHR Incentive Program Stage 2 requirement of submitting to two public
health registries, they were disappointed that the proposed MACRA rule
would only require data submission to an immunization registry.
Response: While we understand these concerns, we believe that
the Public Health and Clinical Registry Reporting measures should not
be included in the base score of the advancing care information
performance category, and we have amended our proposal to specify that the
Immunization Registry Reporting measure is no longer a required measure
in the base score. We agree with the commenter that many EPs have
successfully achieved active engagement with more than one clinical
data registry over the past few years. However, we also know that many
MIPS eligible clinicians are still working diligently toward meeting
the requirements of this objective. We believe that an opportunity for
growth and improvement continues to exist, especially among a large
proportion of MIPS eligible clinicians who did not previously
participate in the Medicare and Medicaid EHR incentive programs.
Therefore, MIPS eligible clinicians may still choose to report the
Immunization Registry Reporting measure to increase their performance
score. In addition, MIPS eligible clinicians who choose to report on
additional public health and clinical data registry reporting measures
may increase their bonus score toward their advancing care information
performance category score.
Comment: Some commenters supported the inclusion of the
Immunization Registry Reporting measure. They noted that immunization
registries are the most widely available and applicable public health
registries and were previously included for EPs in meaningful use. The
continuation of the exclusions for MIPS eligible clinicians who do not
administer immunizations, or whose local registries do not accept data
according to the standards adopted in certification, ensures that MIPS
eligible clinicians are not penalized for factors beyond their control.
Response: While we appreciate these comments, we are not requiring
public health reporting in the base score of the advancing care
information performance category. However, MIPS eligible clinicians may
still choose to report the Immunization Registry Reporting measure to
increase their advancing care information performance score.
Comment: A commenter recommended that there be a resource or
listing of all available public health and clinical registries that
MIPS eligible clinicians could engage with to meet the measures of the
Public Health and Clinical Data Registry Reporting objective.
Response: We are planning to develop a centralized public health
registry repository to assist MIPS eligible clinicians in finding
public health registries that are available, clinically relevant to
their practice, and accepting electronic submissions.
Comment: A commenter questioned why we had modified the Stage 3
measure for syndromic surveillance from an ``urgent care setting'' to a
``non-urgent'' care setting under MIPS.
Response: This was an oversight on our part. As we noted in the
2015 final rule (80 FR 62866) few jurisdictions accept syndromic
surveillance from non-urgent care EPs. We are modifying the measure for
MIPS so that it aligns with the Stage 3 measure that we finalized for
the EHR Incentive Program, limiting the surveillance data to be
submitted to data from an urgent care setting.
After consideration of the comments, we are finalizing our proposal
for the Advancing Care Information objectives and measures and the 2017
Advancing Care Information Transition objectives and measures as
proposed, with modifications to correct language in certain measures as
noted below:
For the 2017 Advancing Care Information Transition Medication
Reconciliation measure: We are maintaining the Modified Stage 2
numerator as follows: ``Numerator: The number of transitions of care in
the denominator where medication reconciliation was performed.''
For the Advancing Care Information View, Download, Transmit (VDT)
measure: During the performance period, at least one unique patient (or
patient-authorized representative) seen by the MIPS eligible clinician
actively engages with the EHR made accessible
[[Page 77237]]
by the MIPS eligible clinician. A MIPS eligible clinician may meet the
measure by a patient either--(1) viewing, downloading, or transmitting
to a third party their health information; or (2) accessing their
health information through the use of an API that can be used by
applications chosen by the patient and configured to the API in the
MIPS eligible clinician's CEHRT; or (3) a combination of (1) and (2).
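The corrected measure language above reduces to a simple "at least one unique patient" check, sketched below for illustration (hypothetical function and field names; not part of the rule or any CMS specification).

```python
def meets_vdt_measure(patients):
    """Illustrative sketch of the View, Download, Transmit (VDT) measure
    language above: the clinician meets the measure if at least one
    unique patient (or patient-authorized representative) (1) viewed,
    downloaded, or transmitted their health information to a third
    party, (2) accessed it through an API, or (3) did a combination of
    both. Field names are hypothetical, not CMS specifications."""
    return any(
        p.get("viewed_downloaded_or_transmitted") or p.get("accessed_via_api")
        for p in patients
    )

print(meets_vdt_measure([{"accessed_via_api": True}, {}]))  # True
print(meets_vdt_measure([{}]))                              # False
```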
For the Advancing Care Information Syndromic Surveillance Reporting
measure: The MIPS eligible clinician is in active engagement with a
public health agency to submit surveillance data from an urgent care
ambulatory setting where the jurisdiction accepts syndromic data from
such settings and the standards are clearly defined.
We note that we will consider new measures for future years of the
program, and invite comment on what types of EHR measures and
measurement should be considered for inclusion in the program. In
addition, we invite comments on how to make the measures that we are
adopting in this final rule more stringent in the future, especially in
light of the statutory requirements.
(c) Exclusions
In the 2015 EHR Incentive Programs final rule (80 FR 62829 through
62871) we outlined certain exclusions from the objectives and measures
of meaningful use for EPs who perform low numbers of a particular
action or activity for a given measure (for example, an EP who writes
fewer than 100 permissible prescriptions during the EHR reporting
period would be granted an exclusion for the Electronic Prescribing
measure) or for EPs who had no office visits during the EHR reporting
period. Moving forward, we believe that the MIPS exclusion criteria as
proposed (81 FR 28173 through 28176) and as further discussed in
section II.E.1. of this final rule with comment period, and advancing
care information performance category scoring methodology together
accomplish the same end as the previously established exclusions for
the majority of the advancing care information performance category
measures. By excluding from MIPS those clinicians who do not exceed the
low-volume threshold (proposed in section II.E.3.c. of the proposed
rule, as MIPS eligible clinicians who, during the performance period,
have Medicare billing charges less than or equal to $10,000 and provide
care for 100 or fewer Part B-enrolled Medicare beneficiaries), we
believe exclusions for most of the individual advancing care
information performance category measures are no longer necessary. The
additional flexibility afforded by the proposed advancing care
information performance category scoring methodology eliminates
required thresholds for measures and allows MIPS eligible clinicians to
focus on, and therefore report higher numbers for, measures that are
more relevant to their practice.
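As a minimal illustration only (not CMS's implementation; the function name and inputs are our own assumptions), the proposed low-volume threshold described above can be expressed as a simple check:

```python
def below_low_volume_threshold(medicare_billed_charges: float,
                               part_b_beneficiaries: int) -> bool:
    """Sketch of the proposed low-volume threshold: a clinician is
    excluded from MIPS if, during the performance period, Medicare
    billing charges are less than or equal to $10,000 and the clinician
    provides care for 100 or fewer Part B-enrolled beneficiaries."""
    return medicare_billed_charges <= 10_000 and part_b_beneficiaries <= 100
```

Note that both conditions must hold; a clinician with low charges but more than 100 Part B-enrolled beneficiaries would not fall below the threshold.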
We noted that EPs who write fewer than 100 permissible prescriptions
during the EHR reporting period are allowed an exclusion for the e-
Prescribing measure under the EHR Incentive Program (80 FR 62834),
which we did not propose for MIPS. We note that the Electronic
Prescribing objective would not be part of the performance score under
our proposals, and thus, MIPS eligible clinicians who write very low
numbers of permissible prescriptions would not be at a disadvantage in
relation to other MIPS eligible clinicians when seeking to achieve a
maximum advancing care information performance category score. For the
purposes of the base score, we proposed that those MIPS eligible
clinicians who write fewer than 100 permissible prescriptions in a
performance period may elect to report their numerator and denominator
(if they have at least one permissible prescription for the numerator),
or they may report a null value. This is consistent with prior policy
which allowed flexibility for clinicians in similar circumstances to
choose an alternate exclusion (80 FR 62789).
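The reporting election described above can be sketched as follows (an illustration under our own naming assumptions, not CMS's actual implementation):

```python
def eprescribing_reporting_options(permissible_rx_written: int) -> list[str]:
    """Sketch of the proposed base-score election for the e-Prescribing
    measure: MIPS eligible clinicians who write fewer than 100 permissible
    prescriptions in a performance period may report a null value, or may
    report their numerator and denominator if they have at least one
    permissible prescription for the numerator."""
    if permissible_rx_written >= 100:
        return ["numerator/denominator"]
    options = ["null value"]
    if permissible_rx_written >= 1:
        options.append("numerator/denominator")
    return options
```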
In addition, in the 2015 EHR Incentive Programs final rule, we
adopted a set of exclusions for the Immunization Registry Reporting
measure under the Public Health and Clinical Data Registry Reporting
objective (80 FR 62870). We recognize that some types of clinicians do
not administer immunizations, and therefore proposed to maintain the
previously established exclusions for the Immunization Registry
Reporting measure. We therefore proposed that these MIPS eligible
clinicians may elect to report their yes/no statement if applicable, or
they may report a null value (if the previously established exclusions
apply) for purposes of reporting the base score.
We note that we did not propose to maintain any of the other
exclusions established under the EHR Incentive Program; however, we
solicited comment on whether other exclusions should be considered
under the advancing care information performance category under the
MIPS.
The following is a summary of the comments we received regarding
our exclusion proposal.
Comment: Many commenters supported our proposal to provide an
exclusion for the e-Prescribing measure to those MIPS eligible
clinicians who write fewer than 100 permissible prescriptions during the
performance period, and many commenters requested additional
exclusions. Commenters disagreed with the removal of exclusions for
other objectives, such as the transitions of care measure under the
Health Information Exchange objective that existed under the EHR
Incentive Programs. Many suggested continuing all EHR Incentive
Programs Modified Stage 2 and Stage 3 exclusions under MIPS. Others
suggested that exclusions be added to the Health Information Exchange
measure under 2014 Edition CEHRT and the MIPS Clinical Information
Reconciliation measure. Some suggested an exclusion for the Health
Information Exchange Objective be added if a MIPS eligible clinician
has fewer than 100 external referrals. Commenters also requested
exclusions for clinicians who do not refer patients and those with
insufficient broadband availability. Commenters recommended low-volume
exclusions for various measures including e-Prescribing, Provide
Patient Access, and the measures under the Coordination of Care Through
Patient Engagement, and Health Information Exchange objectives.
Commenters also urged the addition of an exclusion for MIPS eligible
clinicians practicing in multiple locations because they may encounter
specific hardships due to CEHRT availability. Some requested that any
meaningful use exclusions for Public Health and Clinical Data Registry
Reporting remain in effect for those using 2014 Edition CEHRT. Some
requested that an exclusion exist for the Syndromic Surveillance
Reporting measure for those physicians who do not, or only rarely,
diagnose or treat conditions related to syndromic surveillance. Another
commenter requested that we maintain the meaningful use Stage 3
exclusion for the Patient-Specific Education measure and that MIPS
eligible clinicians with no office visits during the performance period be
permitted to report a ``null value'' and achieve full base and
performance score credit.
Response: We note that we are finalizing fewer required measures
for the base score of the advancing care information performance
category than we had proposed. As there are now fewer required
measures, we do not believe that it is necessary to create additional
exclusions for measures which are now optional for reporting. In
[[Page 77238]]
addition, as we have moved the Immunization Registry Reporting measure
from ``required'' in the base score to ``not required'' in the base
score, we are not finalizing our proposal to provide an exclusion for
those MIPS eligible clinicians who do not administer immunizations
during the performance period. The exclusion is no longer necessary
because MIPS eligible clinicians now have the option of whether or not
to report on Immunization Registry Reporting to receive credit for this
measure under the performance score of the advancing care information
performance category.
Comment: A few commenters supported the elimination of exclusions
and noted that the elimination of thresholds enables MIPS eligible
clinicians to focus more on quality patient care and less on meeting
thresholds.
Response: We appreciate the support of these commenters and agree
that the reduced number of required measures and the elimination of
thresholds have enabled the removal of many of the exclusions that
existed under the EHR Incentive Programs.
After consideration of the comments, we are finalizing our
exclusion policy as proposed with the following modification. We are
not finalizing the exclusions for the Immunization Registry Reporting
measure under the Public Health and Clinical Data Registry Reporting
objective for those MIPS eligible clinicians who do not administer
immunizations as part of their practice.
(8) Additional Considerations
(a) Reweighting of the Advancing Care Information Performance Category
for MIPS Eligible Clinicians Without Sufficient Measures Applicable and
Available
As discussed in the proposed rule, section 101(b)(1)(A) of the
MACRA amended section 1848(a)(7)(A) of the Act to sunset the meaningful
use payment adjustment at the end of CY 2018. Section 1848(a)(7) of the
Act includes certain statutory exceptions to the meaningful use payment
adjustment under section 1848(a)(7)(A) of the Act. Specifically,
section 1848(a)(7)(D) of the Act exempts hospital-based EPs from the
application of the payment adjustment under section 1848(a)(7)(A) of
the Act. In addition, section 1848(a)(7)(B) of the Act provides that
the Secretary may exempt an EP who is not a meaningful EHR user for the
EHR reporting period for the year from the application of the payment
adjustment under section 1848(a)(7)(A) of the Act if the Secretary
determines that compliance with the requirements for being a meaningful
EHR user would result in a significant hardship, such as in the case of
an EP who practices in a rural area without sufficient internet access.
The MACRA did not maintain these statutory exceptions for the advancing
care information performance category of the MIPS. Thus, the exceptions
under sections 1848(a)(7)(B) and (D) of the Act are limited to the
meaningful use payment adjustment under section 1848(a)(7)(A) of the
Act and do not apply in the context of the MIPS.
Section 1848(q)(5)(F) of the Act provides, if there are not
sufficient measures and activities applicable and available to each
type of MIPS eligible clinician, the Secretary shall assign different
scoring weights (including a weight of zero) for each performance
category based on the extent to which the category is applicable to
each type of MIPS eligible clinician, and for each measure and activity
specified for each such category based on the extent to which the
measure or activity is applicable and available to the type of MIPS
eligible clinician.
We believe that under our proposals for the advancing care
information performance category of the MIPS, there may not be
sufficient measures that are applicable and available to certain types
of MIPS eligible clinicians as outlined in the proposed rule, some of
whom may have qualified for a statutory exception to the meaningful use
payment adjustment under section 1848(a)(7)(A) of the Act. For the
reasons stated in the proposed rule, we proposed to assign a weight of
zero to the advancing care information performance category for
purposes of calculating a MIPS final score for these MIPS eligible
clinicians. We refer readers to section II.E.6. of the proposed rule
for more information regarding how the quality, cost and improvement
activities performance categories would be reweighted.
(i) Hospital-Based MIPS Eligible Clinicians
Section 1848(a)(7)(D) of the Act exempts hospital-based EPs from
the application of the meaningful use payment adjustment under section
1848(a)(7)(A) of the Act. We defined a hospital-based EP for the EHR
Incentive Program under Sec. 495.4 as an EP who furnishes 90 percent
or more of his or her covered professional services in sites of service
identified by the codes used in the HIPAA standard transaction as an
inpatient hospital or emergency room setting in the year preceding the
payment year, or in the case of a payment adjustment year, in either of
the 2 years before the year preceding such payment adjustment year.
Under this definition, EPs that have 90 percent or more of payments for
covered professional services associated with claims with Place of
Service Codes 21 (inpatient hospital) or 23 (emergency department) are
considered hospital-based (75 FR 44442).
We believe there may not be sufficient measures applicable and
available to hospital-based MIPS eligible clinicians under our
proposals for the advancing care information performance category of
MIPS.
Hospital-based MIPS eligible clinicians may not have control over
the decisions that the hospital makes regarding the use of health IT
and CEHRT. These MIPS eligible clinicians therefore may have no control
over the type of CEHRT available, the way that the technology is
implemented and used, or whether the hospital continually invests in
the technology to ensure it is compliant with ONC certification
criteria. In addition, some of the specific advancing care information
performance category measures, such as the Provide Patient Access
measure under the Patient Electronic Access objective, require that
patients have access to view, download, and transmit their health
information from the EHR made available by the health care
clinician, in this case the hospital. Thus, the measure is more
attributable and applicable to the hospital and not to the MIPS
eligible clinician, as the hospital controls the availability of the
EHR technology. Further, the requirement under the Protect Patient
Health Information objective to conduct a security risk analysis would
rely on the actions of the hospital, rather than the actions of the
MIPS eligible clinician, as the hospital controls the access and
availability and secure implementation of the EHR technology. In this
case, the measure is again more attributable and applicable to the
hospital than to the MIPS eligible clinician. Further, certain
specialists (such as pathologists, radiologists and anesthesiologists)
who often practice in a hospital setting and may be hospital-based MIPS
eligible clinicians often lack face-to-face interaction with patients,
and thus, may not have sufficient measures applicable and available to
them under our proposals. For example, hospital-based MIPS eligible
clinicians who lack face-to-face patient interaction may not have
patients for which they could transfer or create an electronic summary
of care record.
[[Page 77239]]
In addition, we noted that eligible hospitals and CAHs are subject
to meaningful use requirements under sections 1886(b)(3)(B) and (n) and
1814(l) of the Act, respectively, which were not affected by the
enactment of the MACRA. Eligible hospitals and CAHs are required to
report on objectives and measures of meaningful use under the EHR
Incentive Program, as outlined in the 2015 EHR Incentive Programs final
rule. We noted the objectives and measures of the EHR Incentive
Programs for eligible hospitals and CAHs are specific to these
facilities, and are more applicable and better represent the EHR
technology available in these settings.
For these reasons, we proposed to rely on section 1848(q)(5)(F) of
the Act to assign a weight of zero to the advancing care information
performance category for hospital-based MIPS eligible clinicians. We
proposed to define a ``hospital-based MIPS eligible clinician'' at
Sec. 414.1305 as a MIPS eligible clinician who furnishes 90 percent or
more of his or her covered professional services in sites of service
identified by the codes used in the HIPAA standard transaction as an
inpatient hospital or emergency room setting in the year preceding the
performance period, otherwise stated as the year 3 years preceding the
MIPS payment year. For example, under this proposal, hospital-based
determinations would be made for the 2019 MIPS payment year based on
covered professional services furnished in 2016. We also proposed,
consistent with the EHR Incentive Program, that we would determine
which MIPS eligible clinicians qualify as ``hospital-based'' for a MIPS
payment year. We invited comments on these proposals.
In addition, we sought comment on how the advancing care
information performance category could be applied to hospital-based
MIPS eligible clinicians in future years of MIPS, and the types of
measures that would be applicable and available to these types of MIPS
eligible clinicians.
We also sought comment on whether the previously established 90
percent threshold of payments for covered professional services
associated with claims with Place of Service (POS) Codes 21 (inpatient
hospital) or 23 (emergency department) is appropriate, or whether we
should consider lowering this threshold to account for hospital-based
MIPS eligible clinicians who bill more than 10 percent of claims with a
POS other than 21 or 23. Although we proposed a threshold of 90
percent, we are considering whether a lower threshold would be more
appropriate for hospital-based MIPS eligible clinicians. In particular,
we are interested in what factors should be applied to determine the
threshold for hospital-based MIPS eligible clinicians. We will continue
to evaluate the data to determine whether there are certain thresholds
which naturally define a hospital-based MIPS eligible clinician.
The following is a summary of the comments we received regarding
our proposal for defining hospital-based MIPS eligible clinicians.
Comment: Many commenters supported our proposed definition of a
hospital-based MIPS eligible clinician as one who furnishes 90 percent
or more of his or her covered professional services in either Place of
Service 21 or 23. Many also supported the proposal to assign a weight
of zero to the advancing care information performance category for
hospital-based MIPS eligible clinicians, citing that health IT
decisions for these MIPS eligible clinicians are often made at the
hospital level and are out of their control.
Response: We thank commenters for their support of our proposal.
For the reasons stated in the proposed rule, and based on the measures
we are finalizing in this final rule with comment period, we agree that
there may not be sufficient measures applicable and available to
hospital-based MIPS eligible clinicians to report for the advancing
care information performance category.
Comment: A few commenters disagreed with our proposal and provided
alternate hospital-based thresholds. They recommended that the
threshold be lowered to a majority (or more than 50 percent). Several
commenters recommended a 75 percent threshold, while another suggested
reducing the threshold to 60 percent. One commenter recommended that
CMS adopt a flexible approach that accommodates eligible clinicians who
work in multiple settings.
Response: Although commenters suggested alternate thresholds, they
did not provide specific rationale to support the lowered thresholds or
the factors that should be applied to determine the threshold for
hospital-based MIPS eligible clinicians. With commenter feedback in
mind, we have reevaluated the data and found that historical claims
data support a lower threshold as suggested in these comments. With
consideration of the comments and data we have reviewed, we are
reducing the percentage of covered professional services furnished in
certain sites of service to determine hospital-based MIPS eligible
clinicians from 90 percent to 75 percent. The data analyzed supports
the comments we received while still allowing MIPS eligible clinicians
with 25 percent or more of their services in settings outside of
inpatient hospital, on-campus outpatient hospital (as referenced below)
or emergency room settings to participate and earn points in the
advancing care information performance category.
Comment: Many commenters proposed that CMS broaden the definition
of ``hospital-based clinician'' to include those MIPS eligible
clinicians who are employed by a hospital, but still bill outpatient
services, as those MIPS eligible clinicians will not have input into
the selection of the EHR, pointing out that facility-based clinicians
in both inpatient and outpatient settings experience similar
difficulties in meeting the proposed objectives and measures in the
advancing care information performance category. Another commenter
believed that CMS should include other clinician settings, such as
ambulatory surgery centers, with hospital inpatient and ED settings as
clinicians in other settings may also lack control over EHR technology.
Another urged CMS to revise the criteria to include care provided in
hospital outpatient departments and ASCs, excluding evaluation and
management services. One commenter supported our proposal for hospital-
based MIPS eligible clinicians and recommended that CMS also include
POS 22 (on-campus outpatient hospital) because many hospitalists
provide care in both the inpatient setting, as well as on-campus
outpatient hospital departments. Another commenter suggested that the
definition of hospital-based MIPS eligible clinicians include
observation services.
Response: We agree with commenters that there are MIPS eligible
clinicians who bill using place of service codes other than POS 21 and
POS 23 but who predominantly furnish covered professional services in a
hospital setting and have no control over EHR technology. We believe
these clinicians should be considered hospital-based for purposes of
MIPS, and therefore, we are expanding our hospital-based definition to
include POS 22, on-campus outpatient hospital.
Comment: One commenter recommended using the newly-introduced
Medicare specialty billing code for hospitalists in the definition of
``hospital-based.''
Response: The official use of the Medicare specialty billing code
for hospitalists does not begin until after the start of the MIPS program,
and therefore we have no historical data to support its
[[Page 77240]]
inclusion in the definition of hospital-based at this time. We will
consider this recommendation for future rulemaking.
Comment: One commenter recommended that CMS describe this group of
MIPS eligible clinicians as facility-based rather than hospital-based.
Response: We appreciate the comment although we continue to believe
that hospital-based is the more appropriate term. We believe facility-
based is too broad a term and could be misleading.
Comment: A commenter requested that CMS be transparent about the
time period used for determining whether a MIPS eligible clinician is
hospital-based.
Response: We proposed to use data from the year preceding the
performance period, otherwise stated as the year that is 3 years
preceding the MIPS payment year. We are adopting a modified final
policy and will instead use claims with dates of service between
September 1 of the calendar year 2 years preceding the performance
period through August 31 of the calendar year preceding the performance
period. For example, for the 2017 performance period (2019 MIPS payment
year) we will use the data available at the end of October 2016 for
Medicare claims with dates of service between September 1, 2015,
through August 31, 2016, to determine whether a MIPS eligible clinician
is considered hospital-based by our definition. In the event that it is
not operationally feasible to use claims from this exact time period,
we will use a 12-month period as close as practicable to September 1 of
the calendar year 2 years preceding the performance period and August
31 of the calendar year preceding the performance period. We have
adopted this change in policy in an effort to provide transparency to
MIPS eligible clinicians; this change in timeline will allow us to
notify MIPS eligible clinicians of their hospital-based status prior to
the start of the performance period. By adopting this policy and
notifying MIPS eligible clinicians of their hospital-based
determination prior to the performance period, we enable MIPS eligible
clinicians to better plan and prepare for reporting.
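The lookback window adopted in this final policy can be computed mechanically. The sketch below is our own illustration (not CMS code) of the claims window for a given performance year:

```python
from datetime import date

def hospital_based_claims_window(performance_year: int) -> tuple:
    """Claims window used to determine hospital-based status:
    September 1 of the calendar year 2 years preceding the performance
    period through August 31 of the calendar year preceding the
    performance period."""
    return (date(performance_year - 2, 9, 1),
            date(performance_year - 1, 8, 31))
```

For the 2017 performance period (2019 MIPS payment year), this yields September 1, 2015 through August 31, 2016, matching the example given above.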
Comment: One commenter noted that specialists who meet the criteria
for being considered a hospital-based MIPS eligible clinician may still
have access and the ability to effectively use CEHRT, and may
sufficiently meet the requirements of the advancing care information
performance category, while those MIPS eligible clinicians who do not
meet the hospital-based criteria as proposed would not be able to meet
those requirements. The commenter suggested taking this into
consideration and proposed allowing some MIPS eligible clinicians who
are not hospital-based, but who still face the same hardships, to
reweight and redistribute their advancing care information performance
category score.
Response: We realize that some MIPS eligible clinicians face
similar challenges around the inability to control their access to
CEHRT even if they are not determined to be hospital-based. We refer
readers to section II.E.5.g.(8)(a)(ii) of this final rule with comment
period for further discussion of reweighting applications for those
MIPS eligible clinicians who face a significant hardship.
Comment: Commenters recommended offering MIPS eligible clinicians
or groups the option to petition for a change in their hospital-based
status when there is a change in their organizational affiliation.
Response: We agree that circumstances change from year to year and
MIPS eligible clinicians' hospital-based determination should be
reevaluated for each MIPS payment year. We note that we are finalizing
a policy to determine hospital-based status for each MIPS payment year
by looking at a MIPS eligible clinician's covered professional services
based on claims with dates of service between September 1 of the
calendar year 2 years preceding the performance period through August
31 of the calendar year preceding the performance period. We appreciate
the suggestion that MIPS eligible clinicians should have the ability to
petition their hospital-based status. However, we believe this annual
reevaluation, in combination with our policy that hospital-based MIPS
eligible clinicians may choose to report on the advancing care
information performance category should they determine that there are
applicable and available measures for them to submit, allows sufficient
flexibility for hospital-based MIPS eligible clinicians without the
need to petition their hospital-based status.
After consideration of the public comments and the data we have
available, we are finalizing our proposal for MIPS under Sec. 414.1305
with the following modifications. Under the MIPS, a hospital-based MIPS
eligible clinician is defined as a MIPS eligible clinician who
furnishes 75 percent or more of his or her covered professional
services in sites of service identified by the Place of Service (POS)
codes used in the HIPAA standard transaction as an inpatient hospital
(POS 21), on-campus outpatient hospital (POS 22), or emergency room
(POS 23) setting, based on claims for a period prior to the performance
period as specified by CMS. We intend to use claims with dates of
service between September 1 of the calendar year 2 years preceding the
performance period through August 31 of the calendar year preceding the
performance period, but in the event it is not operationally feasible
to use claims from this time period, we will use a 12-month period as
close as practicable to this time period.
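The finalized 75 percent site-of-service test might be sketched as follows (an illustration that assumes covered professional services are tallied by POS code; not CMS's actual implementation):

```python
# POS 21 (inpatient hospital), POS 22 (on-campus outpatient hospital),
# POS 23 (emergency room)
HOSPITAL_POS_CODES = {"21", "22", "23"}

def is_hospital_based(services_by_pos: dict, threshold: float = 0.75) -> bool:
    """Sketch of the finalized definition: a MIPS eligible clinician is
    hospital-based if 75 percent or more of covered professional
    services (tallied here per Place of Service code) fall in POS 21,
    22, or 23 during the lookback period."""
    total = sum(services_by_pos.values())
    if total == 0:
        return False
    in_hospital = sum(v for pos, v in services_by_pos.items()
                      if pos in HOSPITAL_POS_CODES)
    return in_hospital / total >= threshold
```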
We note that this expanded definition of hospital-based MIPS
eligible clinician will include a greater number of MIPS eligible
clinicians than the previously proposed definition. We have expanded
this definition because we believe it better represents hospital-based
eligible clinicians and acknowledges the challenges they face with
regard to EHR reporting as stated above. For the reasons stated in the
proposed rule, our assumption remains that MIPS eligible clinicians who
are determined hospital-based do not have sufficient advancing care
information measures applicable to them, and thus we will reweight the
advancing care information performance category to zero percent of the
MIPS final score for the MIPS payment year in accordance with section
1848(q)(5)(F) of the Act. If a MIPS eligible clinician disagrees with
our assumption and believes there are sufficient advancing care
information measures applicable to them, they have the option to report
the advancing care information measures for the performance period for
the MIPS payment year for which they are determined hospital-based.
However, if a MIPS eligible clinician who is determined hospital-based
chooses to report on the advancing care information measures, they will
be scored on the advancing care information performance category like
all other MIPS eligible clinicians, and the performance category will
be given the weighting prescribed by section 1848(q)(5)(E) of the Act
regardless of their advancing care information performance category
score.
(ii) MIPS Eligible Clinicians Facing a Significant Hardship
Section 1848(a)(7)(B) of the Act provides that the Secretary may
exempt an EP who is not a meaningful EHR user for the EHR reporting
period for the year from the application of the payment adjustment
under section 1848(a)(7)(A) of the Act if the Secretary determines that
compliance with the requirements for being a meaningful EHR user would
result in a significant hardship. In the Stage 2 final rule (77 FR
54097-54100),
[[Page 77241]]
we defined certain categories of significant hardships that may prevent
an EP from meeting the requirements of being a meaningful EHR user.
These categories include:
Insufficient Internet Connectivity (as specified in 42 CFR
495.102(d)(4)(i)).
Extreme and Uncontrollable Circumstances (as specified in
42 CFR 495.102(d)(4)(iii)).
Lack of Control over the Availability of CEHRT (as
specified in 42 CFR 495.102(d)(4)(iv)(A)).
Lack of Face-to-Face Patient Interaction (as specified in
42 CFR 495.102(d)(4)(iv)(B)).
We believe that under our proposals for the advancing care
information performance category, there may not be sufficient measures
applicable and available to MIPS eligible clinicians within the
categories above. For these MIPS eligible clinicians, we proposed to
rely on section 1848(q)(5)(F) of the Act to re-weight the advancing
care information performance category to zero.
Sufficient internet access is fundamental to many of the measures
proposed for the advancing care information performance category. For
example, the e-Prescribing measure requires sufficient access to the
internet to transmit prescriptions electronically, and the Secure
Messaging measure requires sufficient internet access to receive and
respond to patient messages. These measures may not be applicable to
MIPS eligible clinicians who practice in areas with insufficient
internet access. We proposed to require MIPS eligible clinicians to
demonstrate insufficient internet access through an application process
in order to be considered for a reweighting of the advancing care
information performance category. The application would have to
demonstrate that the MIPS eligible clinicians lacked sufficient
internet access, during the performance period, and that there were
insurmountable barriers to obtaining such infrastructure, such as a
high cost of extending the internet infrastructure to their facility.
Extreme and uncontrollable circumstances, such as a natural
disaster in which an EHR or practice building is destroyed, can happen
at any time and are outside a MIPS eligible clinician's control. If a
MIPS eligible clinician's CEHRT is unavailable as a result of such
circumstances, the measures specified for the advancing care
information performance category may not be available for the MIPS
eligible clinician to report. We proposed that these MIPS eligible
clinicians submit an application to include the circumstances by which
the EHR technology was unavailable, and for what period of time it was
unavailable, to be considered for reweighting of their advancing care
information performance category.
In the Stage 2 final rule (77 FR 54100) we discussed EPs who
practice at multiple locations, and may not have the ability to impact
their practices' health IT decisions. We noted the case of surgeons
using ambulatory surgery centers or a physician treating patients in a
nursing home who does not have any other vested interest in the
facility, and may have no influence or control over the health IT
decisions of that facility. If MIPS eligible clinicians lack control
over the CEHRT in their practice locations, then the measures specified
for the advancing care information performance category may not be
available to them for reporting. To be considered for a reweighting of
the advancing care information performance category, we proposed that
these MIPS eligible clinicians would need to submit an application
demonstrating that a majority (50 percent or more) of their outpatient
encounters occur in locations where they have no control over the
health IT decisions of the facility, and request their advancing care
information performance category score be reweighted to zero. We noted
that in such cases, the MIPS eligible clinician must have no control
over the availability of CEHRT. Control does not imply final decision-
making authority. For example, we would generally view MIPS eligible
clinicians practicing in a large group as having control over the
availability of CEHRT, because they can influence the group's purchase
of CEHRT, they may reassign their claims to the group, they may have a
partnership/ownership stake in the group, or any payment adjustment
would affect the group's earnings and the entire impact of the
adjustment would not be borne by the individual MIPS eligible
clinician. These MIPS eligible clinicians can influence the
availability of CEHRT and the group's earnings are directly affected by
the payment adjustment. Thus, such MIPS eligible clinicians would not,
as a general rule, be viewed as lacking control over the availability
of CEHRT and would not be eligible for their advancing care information
performance category to be reweighted based on their membership in a
group practice that has not adopted CEHRT.
In the Stage 2 final rule (77 FR 54099), we noted the challenges
faced by EPs who lack face-to-face interaction with patients (EPs that
are non-patient facing), or lack the need to provide follow-up care
with patients. Many of the measures proposed under the advancing care
information performance category require face-to-face interaction with
patients, including all eight of the measures that make up the three
performance score objectives (Patient Electronic Access, Coordination
of Care Through Patient Engagement, and Health Information Exchange).
Because these proposed measures rely so heavily on face-to-face patient
interactions, we do not believe there would be sufficient measures
applicable to non-patient facing MIPS eligible clinicians under the
advancing care information performance category. We proposed to
automatically reweight the advancing care information performance
category to zero for a MIPS eligible clinician who is classified as a
non-patient facing MIPS eligible clinician (based on the number of
patient-facing encounters billed during a performance period) without
requiring an application to be submitted by the MIPS eligible
clinician. We refer readers to section II.E.1.b. of the proposed rule
for further discussion of non-patient facing MIPS eligible clinicians.
We also sought comment on how the advancing care information
performance category could be applied to non-patient facing MIPS
eligible clinicians in future years of MIPS, and the types of measures
that would be applicable and available to these types of MIPS eligible
clinicians.
We proposed that all applications for reweighting the advancing
care information performance category be submitted by the MIPS eligible
clinician or designated group representative in the form and manner
specified by CMS. We proposed that all applications may be submitted on
a rolling basis, but must be received by us no later than the close of
the submission period for the relevant performance period, or a later
date specified by us. For example, for the 2017 performance period,
applications must be submitted no later than March 31, 2018 (or later
date as specified by us) to be considered for reweighting the advancing
care information performance category for the 2019 MIPS payment year.
An application would need to be submitted annually to be considered for
reweighting each year.
The following is a summary of comments received.
Comment: Most commenters supported the inclusion of something
similar to a hardship exception under the EHR Incentive Program for the
advancing care information performance category and the reweighting of
the advancing care information score to zero. Other commenters
expressed appreciation that CMS has moved away
[[Page 77242]]
from the 5-year limitation on hardship exceptions.
Response: We appreciate the support of our proposal, and note that
we did not propose exceptions from reporting on the advancing care
information performance category or from application of the MIPS
payment adjustment factor based on hardship. Rather, we are recognizing
that there may not be sufficient measures applicable and available
under the advancing care information performance category to MIPS
eligible clinicians who lack sufficient internet connectivity, face
extreme and uncontrollable circumstances, lack control over the
availability of CEHRT, or do not have face-to-face interactions with
patients. For those MIPS eligible clinicians, we proposed to reweight
the advancing care information performance category to zero percent in
the MIPS final score.
Comment: We received many comments suggesting various additions to
our proposal. One commenter suggested hardship exceptions under the
advancing care information performance category for both 2017 and 2018
for practices that are experiencing transitional infrastructural
changes. One commenter suggested expanding the exceptions for
unforeseen circumstances to a minimum of 5 years. Another requested
that one of the hardship categories for the 2017 performance period
include the lateness of the publication of the final rule with comment
period, which will create a short timeline for adjustment to new
requirements. A commenter strongly recommended that hospitalists be
added to the list because they do the majority of their work in a
hospital.
Response: We note that, in some cases, transitional infrastructure
changes might be considered under the extreme and uncontrollable
circumstances category, depending upon the particular circumstances of
the clinician practice. We believe that it is necessary for MIPS
eligible clinicians to submit an application to reweight their
advancing care information performance category score to zero for each
applicable year. We do not believe it is appropriate to automatically
reweight to zero the advancing care information performance category
score for a span of multiple years as circumstances change year to
year. We believe that our policy to allow a minimum of 90 days of data for
the transition year of MIPS helps to address any issues related to the
timing of the release of this final rule with comment period. We refer
readers to section II.E.4. of this final rule with comment period for
further discussion of the MIPS performance period. Finally, we note that
hospital medicine is not a clinician specialty that is identified
through the Medicare enrollment process. Those MIPS eligible clinicians
that are considered hospital-based by our definition would have their
advancing care information performance category weighted at zero
percent of the MIPS final score as was previously discussed in this
final rule with comment period.
Comment: Many commenters suggested additional categories related to
CEHRT. One commenter asked CMS to create hardship exceptions to ensure
that clinicians are not unfairly punished for the failures of their
CEHRT, citing concerns of past failures with technologies in meeting
standards imposed by CMS and ONC. Yet another commenter recommended
that we consider expanding the criteria for 2017 and 2018 to include
specific clinician types that can prove that they would incur major
administrative and financial burdens by adopting EHR technology for the
first and second performance period. Another commenter suggested that
exceptions be developed to avoid negative payment adjustments in 2019
for EHR migration difficulties. Other commenters suggested an exception
for switching CEHRT and a hardship exception when CEHRT is decertified.
Response: We appreciate this input and understand that there may be
many issues related to CEHRT that may result in a MIPS eligible
clinician being unable to report on measures under the advancing care
information performance category due to circumstances outside of their
control. As we do not want to limit potential unforeseen circumstances
we will consider issues with vendors and CEHRT under the ``extreme and
uncontrollable circumstances'' category, but we note that not all
issues may qualify as extreme and outside of the clinician's control.
Comment: One commenter supported continued hardship exceptions for
clinicians who practice in settings such as skilled nursing facilities
where they do not have control over the availability of CEHRT; however,
they also believed this proposal does not go far enough. The commenter
explained that without a hardship exception granted, these facilities
will be encouraged to limit the number of patients seen by their
clinicians so that they can avoid being eligible to participate in
MIPS, which would adversely affect the access to care provided to this
vulnerable population. They requested that skilled nursing facility
visits (POS 31) and nursing facility visits (POS 32) (CPT codes 99304-
99318) simply be exempt from meaningful use and, by extension, the
advancing care information performance category.
Response: While we acknowledge this issue, we believe that it is
adequately addressed by the ``lack of control over CEHRT'' category and
does not warrant the exemption of certain evaluation and management
codes. As we have noted previously, this final rule with comment period
only addresses policies related to MIPS eligible clinicians and not
Medicaid EPs, eligible hospitals or CAHs under the Medicare and
Medicaid EHR Incentive Programs.
Comment: Other commenters believed that CMS should continue a
hardship exception for medical centers because medical centers will
have to monitor more programs requiring overlapping, but not identical, data.
The commenters stated that the processes are confusing and time-
consuming.
Response: We currently do not allow a hardship exception specific
to medical centers under the EHR Incentive Program. Medical centers are
not subject to the application of the MIPS payment adjustment factors
and are not addressed in this rulemaking.
Comment: A few commenters requested that, as was included in the
Medicare and Medicaid EHR Incentive Programs, an automatic hardship
exception be granted to the following PECOS specialties: diagnostic
radiology (30), nuclear medicine (36), interventional radiology (94),
anesthesiology (05) and pathology (22).
Response: We disagree that we should reweight to zero the advancing
care information performance category score based on specialty code,
and note that our proposal and final policy for reweighting the
advancing care information performance category is based on the number
of patient-facing encounters billed during a performance period, not
based on specialty type. In the EHR Incentive Programs, we offered an
exception to the Medicare payment adjustments to certain specialties as
designated in PECOS because we recognized that EPs within those
specialties lack face-to-face interactions with patients and lack
follow-up with patients with sufficient frequency (77 FR 54099-54100). Under the MIPS,
we proposed to automatically reweight the advancing care information
performance category to zero for any hospital-based MIPS eligible
clinicians and/or non-patient facing MIPS eligible clinicians who may
not have sufficient measures applicable and available to them. Some of
the MIPS eligible clinicians in specialties referenced by the commenter
may have sufficient patient encounters to report the measures under the
advancing care information performance
[[Page 77243]]
category, and thus, the advancing care information performance category
measures would be applicable to these MIPS eligible clinicians.
Comment: A commenter suggested that CMS publish an explanation of
what constitutes ``limited'' internet access and list limited access
areas per the Federal Communications Commission (FCC).
Response: We have stated that MIPS eligible clinicians who are
located in an area without sufficient Internet access to comply with
objectives requiring Internet connectivity, and who face insurmountable
barriers to obtaining such Internet connectivity, may apply for a
significant hardship exception. The FCC's National Broadband Map allows MIPS eligible
clinicians to search, analyze, and map broadband availability in their
area: http://www.broadbandmap.gov/.
Comment: One commenter recommended a new option to allow
applications to reweight advancing care information performance
category to zero for MIPS eligible clinicians who did not previously
intend to participate in meaningful use in CY 2017, and instead planned
to obtain a significant hardship exception to avoid the Electronic Health Record
Incentive Program 2019 payment adjustment.
Response: We note that under section 101(b)(1) of the MACRA, the
payment adjustments under the Medicare EHR incentive program will end
after the 2018 payment adjustment year, which is based on the EHR
reporting period in 2016. Therefore, MIPS eligible clinicians are not
required to participate in the Medicare EHR incentive programs in the
2017 EHR reporting period to avoid a 2019 payment adjustment. MIPS
eligible clinicians may qualify for reweighting of their advancing care
information performance category score if they meet the criteria
outlined in our policy for reweighting under MIPS.
Comment: A commenter recommended that CMS explicitly clarify that
the ``lack of influence over the availability of CEHRT'' option for
reweighting advancing care information performance category to zero is
not limited to multi-location/practice MIPS eligible clinicians.
Response: The ``lack of control over the availability of CEHRT'' is
not limited to MIPS eligible clinicians who practice at multiple
locations; instead, it is available to any MIPS eligible clinician who
may not have the ability to impact their practices' health IT
decisions. We noted that in such cases, the MIPS eligible clinician
must have no control over the availability of CEHRT. We further
specified that a majority (50 percent or more) of their outpatient
encounters must occur in locations where they have no control over the
health IT decisions of the facility. Control does not imply final
decision-making authority as demonstrated in the example given in our
proposal.
Comment: A commenter recommended granting MIPS eligible clinicians
that are eligible for Social Security benefits a hardship exception
because of the considerable expenditures of both human and financial
capital that would require several years to see a return on investment.
Response: While we understand this suggestion, we do not believe
that it is appropriate to reweight this category solely on the basis of
a MIPS eligible clinician's age or Social Security status. We have
analyzed EHR Incentive Program data, as well as provider feedback, and
believe that while other factors such as the lack of access to CEHRT or
unforeseen environmental circumstances may constitute a significant
hardship, the age of a MIPS eligible clinician alone or the preference
to not obtain CEHRT does not.
Comment: Commenters requested that application for reweighting not
be burdensome for MIPS eligible clinicians to submit. One commenter
requested that CMS clarify whether MIPS eligible clinicians will need
to submit an annual application to be excluded from the advancing care
information performance category or if this will occur automatically
and the commenter preferred the latter.
Response: We noted that CMS would specify, outside the rulemaking
process, the form and manner in which reweighting applications are submitted.
Additional information on the submission process will be available
after the rule is published. We do note that if an application is
required, it must be submitted annually.
Comment: Some commenters stated that MIPS eligible clinicians who
did not qualify for meaningful use will need more time to familiarize
themselves with EHR and could receive a low MIPS final score and
negative payment adjustment due to lack of CEHRT. They believed that
these MIPS eligible clinicians most likely serve high-disparity
populations and that the most vulnerable patient populations could be
negatively impacted.
Response: We acknowledge that under MIPS more clinicians will be
subject to the requirements of EHR reporting than were previously
eligible under the EHR Incentive Program and may not have advancing
care information measures that are applicable or available for them to
submit. For this reason, we have proposed to reweight the advancing
care information performance category to zero for hospital-based MIPS
eligible clinicians, NPs, PAs, CRNAs, and CNSs. We have also allowed for
MIPS eligible clinicians to apply for a reweighting of their advancing
care information performance category score should the MIPS eligible
clinician not have measures that are applicable or available to them
for various reasons as discussed in section II.E.5.g. of this final
rule with comment period. We do not agree that MIPS eligible clinicians
who were not eligible for the EHR Incentive Programs are concentrated
in high disparity populations, nor do we believe that serving such a
population would limit a MIPS eligible clinician's ability to report on
the advancing care information objectives and measures.
After consideration of the comments, we are finalizing our policy to
reweight the advancing care information performance category to zero
percent of the MIPS final score for MIPS eligible clinicians facing
significant hardships, as proposed. For the reasons discussed in the
proposed rule, we continue to assume that these clinicians may not have
sufficient measures applicable and available to them for the advancing
care information performance category. Should a MIPS eligible clinician
apply for their advancing care information performance category to be
reweighted under this policy but subsequently determine that their
situation has changed such that they believe there are sufficient
measures applicable and available to them for the advancing care
information performance category, they may report on the measures. If
they choose to report, they will be scored on the advancing care
information performance category like any other MIPS eligible
clinician, and the category will be given the weighting prescribed by
section 1848(q)(5)(E) of the Act regardless of the MIPS eligible
clinician's advancing care information performance category score.
(iii) Nurse Practitioners, Physician Assistants, Clinical Nurse
Specialists, and Certified Registered Nurse Anesthetists
The definition of a MIPS eligible clinician under section
1848(q)(1)(C) of the Act includes certain non-physician practitioners,
including Nurse Practitioners (NPs), Physician Assistants (PAs),
Certified Registered Nurse Anesthetists (CRNAs), and Clinical Nurse
Specialists (CNSs). CRNAs and CNSs are not eligible for the incentive
payments under Medicare or
[[Page 77244]]
Medicaid for the adoption and meaningful use of CEHRT (sections 1848(o)
and 1903(t) of the Act, respectively) or subject to the meaningful use
payment adjustment under Medicare (section 1848(a)(7)(A) of the Act),
and thus, they may have little to no experience with the adoption or
use of CEHRT. Similarly, NPs and PAs may also lack experience with the
adoption or use of CEHRT, as they are not subject to the payment
adjustment under section 1848(a)(7)(A) of the Act. We further noted
that only 19,281 NPs and 1,379 PAs have attested to the Medicaid
EHR Incentive Program. Nurse practitioners are eligible for the
Medicaid incentive payments under section 1903(t) of the Act, as are
PAs practicing in an FQHC or an RHC that is led by a PA, if they meet
patient volume requirements and other eligibility criteria.
Because many of these non-physician clinicians are not eligible to
participate in the Medicare and/or Medicaid EHR Incentive Program, we
have little evidence as to whether there are sufficient measures
applicable and available to these types of MIPS eligible clinicians
under our proposals for the advancing care information performance
category. The low numbers of NPs and PAs who have attested for the
Medicaid incentive payments may indicate that EHR Incentive Program
measures required to earn the incentive are not applicable or
available, and thus, would not be applicable or available under the
advancing care information performance category. For these reasons, we
proposed to rely on section 1848(q)(5)(F) of the Act to assign a weight
of zero to the advancing care information performance category if there
are not sufficient measures applicable and available to NPs, PAs,
CRNAs, and CNSs. We would assign a weight of zero only in the event
that an NP, PA, CRNA, or CNS does not submit any data for any of the
measures specified for the advancing care information performance
category. We encourage all NPs, PAs, CRNAs, and CNSs to report on these
measures to the extent they are applicable and available, however, we
understand that some NPs, PAs, CRNAs, and CNSs may choose to accept a
weight of zero for this performance category if they are unable to
fully report the advancing care information measures. We believe this
approach is appropriate for the first MIPS performance period based on
the payment consequences associated with reporting, the fact that many
of these types of MIPS eligible clinicians may lack experience with EHR
use, and our current uncertainty as to whether we have proposed
sufficient measures that are applicable and available to these types of
MIPS eligible clinicians. We noted that we would use the first MIPS
performance period to further evaluate the participation of these MIPS
eligible clinicians in the advancing care information performance
category and would consider for subsequent years whether the measures
specified for this category are applicable and available to these MIPS
eligible clinicians.
We invited comments on our proposal. We additionally sought comment
on how the advancing care information performance category could be
applied to NPs, PAs, CRNAs, and CNSs in future years of MIPS, and the
types of measures that would be applicable and available to these types
of MIPS eligible clinicians.
The following is a summary of the comments we received regarding
our proposal.
Comment: Commenters generally supported our proposal to reweight
the advancing care information performance category for those MIPS
eligible clinicians without sufficient measures. Most commenters
supported CMS' proposal that submission under the advancing care
information performance category for NPs, PAs, CNSs, and CRNAs, would
be optional in 2017 given these non-physicians' lack of past
participation in meaningful use.
Response: We appreciate commenters for their support of this
proposal and we agree for the reasons stated in the proposed rule that
it is appropriate to assign a weight of zero only if the aforementioned
practitioners do not submit data for any of the advancing care
information performance category measures.
Comment: One commenter urged CMS to revise the proposed rule so
that NPs and advanced practice nurses (APNs) can obtain EHR Incentive
Program incentives.
Response: This final rule with comment period implements the MIPS
as authorized under section 1848(q) of the Act. Eligibility for
incentive payments under the EHR Incentive Program is determined under
a separate section of the statute. Any change to the eligibility or
extension of incentive payments under the EHR Incentive Program would
require a change to the law and is not in the scope of this final rule
with comment period.
Comment: One commenter requested CMS make advancing care
information performance category participation optional for clinicians
who primarily provide services in post-acute care settings, which have
not been part of the EHR Incentive Program in the past. Several
commenters supported excluding clinicians not eligible to participate
in the Medicare/Medicaid EHR Incentive Programs.
Response: While we understand the concerns of the commenters, we
disagree with their suggestions. Section 1848(q)(1)(C)(i) of the Act
defines a MIPS eligible clinician to include specific types of
clinicians and provides discretion to include other types of clinicians
in later years. In the future, we expect additional clinician types
will be added to the definition of MIPS eligible clinician.
Comment: A commenter noted that by allowing additional non-
physician practitioners (NPs, PAs, and in the future, dietitians, etc.)
to be eligible to participate in the advancing care information
performance category, the number of eligible clinicians under MIPS will
greatly increase from the number of eligible clinicians in the EHR
Incentive Program. The increased number of eligible clinicians will
cause an unnecessary burden for organizational support staff to track
and report their data. Commenters recommended that advancing care
information performance category data reporting be rolled up to the clinicians that
they bill under so that clinician reporting includes data representing
their MIPS eligible clinicians.
Response: As we noted above, the definition of MIPS eligible
clinician is broader than the definition of an EP in the EHR Incentive
Program, and we intend to add additional clinician types to the
definition of MIPS eligible clinician in future years. Under this
program, we have added a group reporting option in which MIPS eligible
clinicians who have reassigned their billing rights to a TIN may report
at the group or TIN level instead of the individual level. We believe
this addresses the administrative concerns raised by this comment and
allows MIPS eligible clinicians to aggregate their data for reporting,
therefore reducing reporting burden.
After consideration of the comments, we are finalizing our policy
for NPs, PAs, CRNAs, and CNSs as proposed. These MIPS eligible clinicians
may choose to submit advancing care information measures should they
determine that these measures are applicable and available to them;
however, we note that if they choose to report, they will be scored on
the advancing care information performance category like all other MIPS
eligible clinicians and the performance category will be given the
weighting prescribed by section 1848(q)(5)(E) of the Act regardless of
[[Page 77245]]
their advancing care information performance category score.
(iv) Medicaid
In the 2015 EHR Incentive Programs final rule we adopted an
alternate method for demonstrating meaningful use for certain Medicaid
EPs that would be available beginning in 2016, for EPs attesting for an
EHR reporting period in 2015 (80 FR 62900). Certain Medicaid EPs who
previously received an incentive payment under the Medicaid EHR
Incentive Program, but failed to meet the eligibility requirements for
the program in subsequent years, are permitted to attest using the CMS
Registration and Attestation system for the purpose of avoiding the
Medicare payment adjustment (80 FR 62900). However, as discussed in the
proposed rule, section 101(b)(1)(A) of the MACRA amended section
1848(a)(7)(A) of the Act to sunset the meaningful use payment
adjustment for Medicare EHR Incentive Program EPs at the end of CY
2018. This means that after the CY 2018 payment adjustment year, there
will no longer be a separate Medicare EHR Incentive Program for EPs,
and therefore Medicaid EPs who may have used this alternate method for
demonstrating meaningful use will no longer be potentially subject to a
payment adjustment under the Medicare EHR Incentive Program at that time.
Accordingly, there will no longer be a need for this alternate method
of demonstrating meaningful use after the CY 2018 payment adjustment
year.
Similarly, beginning in 2014, states were required to collect,
upload and submit attestation data for Medicaid EPs for the purposes of
demonstrating meaningful use to avoid the Medicare payment adjustment
(80 FR 62915). This form of reporting will also no longer need to
continue with the sunset of the meaningful use payment adjustment for
Medicare EHR Incentive Program EPs at the end of CY 2018. Accordingly,
we proposed to amend the reporting requirement described at 42 CFR
495.316(g) by adding an ending date such that after the CY 2018 payment
adjustment year states would no longer be required to report on
meaningful EHR users.
We noted that the Medicaid EHR Incentive Program for EPs was not
impacted by the MACRA and the requirement under section 1848(q) of the
Act to establish the MIPS program. We did not propose any changes to
the objectives and measures previously established in rulemaking for
the Medicaid EHR Incentive Program, and thus, EPs participating in that
program must continue to report on the objectives and measures under
the guidelines and regulations of that program.
Accordingly, reporting on the measures specified for the advancing
care information performance category under MIPS cannot be used as a
demonstration of meaningful use for the Medicaid EHR Incentive
Programs. Similarly, a demonstration of meaningful use in the Medicaid
EHR Incentive Programs cannot be used for purposes of reporting under
MIPS.
Therefore, MIPS eligible clinicians who are also participating in
the Medicaid EHR Incentive Programs must report their data for the
advancing care information performance category through the submission
methods established for MIPS in order to earn a score for the advancing
care information performance category under MIPS and must separately
demonstrate meaningful use in their state's Medicaid EHR Incentive
Program in order to earn a Medicaid incentive payment. The Medicaid EHR
Incentive Program continues through payment year 2021, with 2016 being
the final year an EP can begin receiving incentive payments (Sec.
495.310(a)(1)(iii)). We solicited comments on alternative reporting or
proxies for EPs who provide services to both Medicaid and Medicare
patients and are eligible for both MIPS and the Medicaid EHR Incentive
Payment.
The following is a summary of the comments we received regarding
our proposal to separate the reporting requirements of MIPS and the
Medicaid EHR Incentive Programs:
Comment: Many commenters noted the reporting burden imposed on
MIPS eligible clinicians who also participate in the Medicaid EHR
Incentive Programs, as those clinicians would have to report separately
to achieve points in the advancing care information performance
category and to receive an incentive payment in the Medicaid EHR
Incentive Programs. Some
commenters urged CMS to align reporting requirements and submission
methods across both programs to eliminate duplication in reporting
effort. Some commenters requested that CMS eliminate the need to report
duplicative quality measures by modifying its proposal to require that
if quality is reported in a manner acceptable under MIPS or an APM,
then it would not need to be reported under the Medicaid EHR Incentive
Program. Other commenters expressed concern that varying reporting
requirements for MIPS eligible clinicians, for hospitals and Medicaid
EPs who participate in the EHR Incentive Programs will bring hardship
to clinician staff, as well as EHR vendors.
Response: We understand that reporting burden is a concern to MIPS
eligible clinicians and CMS remains committed to exploring
opportunities for alignment when possible. However, MIPS and the
Medicare and Medicaid EHR Incentive Program are two separate programs
with distinct requirements. The reporting requirements and scoring
methods of the Medicaid EHR Incentive Program and those finalized for
the advancing care information performance category in the MIPS program
differ significantly. For example, in the Medicaid EHR Incentive
Programs, EPs must report on all objectives and meet measure thresholds
finalized in the 2015 EHR Incentive Programs final rule. In the
advancing care information performance category, MIPS eligible
clinicians must report on objectives and measures, but are not required
to meet measure thresholds to be considered a meaningful EHR user.
We remind commenters that while MIPS eligible clinicians would be
required to meet the requirements of the advancing care information
performance category to earn points toward their MIPS final score,
there is no longer a requirement that EPs demonstrate meaningful use
under the Medicaid EHR incentive program as a way to avoid the Medicare
EHR payment adjustments. However, MIPS eligible clinicians who meet the
Medicaid EHR Incentive Program eligibility requirements are encouraged
to additionally participate in the Medicaid EHR Incentive Program to be
eligible for Medicaid incentive payments through program year 2021.
Comment: A few commenters proposed that MIPS eligible clinicians
who are participating in the Medicaid EHR Incentive Program be exempted
from reporting to MIPS until after the completion of their final EHR
performance period. Others proposed allowing clinicians to choose
either to report in the Medicaid EHR Incentive Program or the advancing
care information performance category of MIPS. One commenter suggested
awarding MIPS eligible clinicians 30 points toward the advancing care
information performance category score if they successfully attest to
meaningful use in the Medicaid EHR Incentive Program.
Response: As previously mentioned, objective and measure
requirements of the Medicaid EHR Incentive Program and those finalized
for the advancing care information performance category in the MIPS
program vary too greatly for one to serve as a proxy for the other.
We are finalizing our Medicaid policy as proposed.
[[Page 77246]]
h. APM Scoring Standard for MIPS Eligible Clinicians Participating in
MIPS APMs
Under section 1848(q)(1)(C)(ii) of the Act, as added by section
101(c)(1) of MACRA and as discussed in section II.F.5. of this final
rule with comment period, Qualifying APM Participants (QPs) are not
MIPS eligible clinicians and are thus excluded from MIPS payment
adjustments. Partial Qualifying APM Participants (Partial QPs) are also
not MIPS eligible clinicians unless they opt to report and be scored
under MIPS. All other eligible clinicians participating in APMs who are
MIPS eligible clinicians are subject to MIPS requirements, including
reporting requirements and payment adjustments. However, most current
APMs already assess their participants on cost and quality of care and
require engagement in certain care improvement activities.
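The participation statuses described above reduce to a simple decision rule. The following sketch is an illustrative aid only, not CMS code; the function and parameter names are assumptions made for clarity.

```python
# Illustrative sketch only (not CMS code) of the participation statuses
# described above. Function and parameter names are assumptions.

def mips_status(is_qp: bool, is_partial_qp: bool, elects_mips: bool = False) -> str:
    """Return how an APM-participating eligible clinician relates to MIPS."""
    if is_qp:
        # QPs are not MIPS eligible clinicians and are excluded from
        # MIPS payment adjustments.
        return "excluded from MIPS"
    if is_partial_qp:
        # Partial QPs are excluded unless they opt to report and be scored.
        return "subject to MIPS" if elects_mips else "excluded from MIPS"
    # All other eligible clinicians in APMs who are MIPS eligible clinicians
    # are subject to MIPS reporting requirements and payment adjustments.
    return "subject to MIPS"
```

For example, under this sketch a Partial QP is excluded from MIPS unless the clinician elects to report and be scored.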
We proposed at Sec. 414.1370 to establish a scoring standard for
MIPS eligible clinicians participating in certain types of APMs (``APM
scoring standard'') to reduce participant reporting burden by
eliminating the need for such APM eligible clinicians to submit data
for both MIPS and their respective APMs. In accordance with section
1848(q)(1)(D)(i) of the Act, we proposed to assess the performance of a
group of MIPS eligible clinicians in an APM Entity that participates in
certain types of APMs based on their collective performance as an APM
Entity group, as defined at Sec. 414.1305.
In addition to reducing reporting burden, we sought to ensure that
eligible clinicians in APM Entity groups are not assessed in multiple
ways on the same performance activities. For instance, performance on
the generally applicable cost measures under MIPS could contribute to
upward or downward adjustments to payments under MIPS in a way that is
not aligned with the strategy in an ACO initiative for reducing total
Medicare costs for a specified population of beneficiaries attributed
through the unique ACO initiative's attribution methodology. Depending
on the terms of the particular APM, we believe similar misalignments
could be common between the MIPS quality and cost performance
categories and the evaluation of quality and cost in APMs. We believe
requiring eligible clinicians in APM Entity groups to submit data, be
scored on measures, and be subject to payment adjustments that are not
aligned between MIPS and an APM could potentially undermine the
validity of testing or performance evaluation under the APM. We also
believe imposition of these requirements would result in reporting
activity that provides little or no added value to the assessment of
eligible clinicians, and could confuse eligible clinicians as to which
CMS incentives should take priority over others in designing and
implementing care activities.
We proposed to apply the APM scoring standard to MIPS eligible
clinicians in APM Entity groups participating in certain APMs (``MIPS
APMs'') that meet the criteria listed below (and would be identified as
``MIPS APMs'' on the CMS Web site). In the proposed rule, we defined
the proposed criteria for MIPS APMs, the MIPS performance period for
APM Entity groups, the proposed MIPS scoring methodology for APM Entity
groups, and other information related to the APM scoring standard (81
FR 28234-28247).
(1) Criteria for MIPS APMs
We proposed at Sec. 414.1370 to specify that the APM scoring
standard under MIPS would only be applicable to eligible clinicians
participating in MIPS APMs, which we proposed to define as APMs (as
defined in section II.F.4. of the proposed rule) that meet the
following criteria: (1) APM Entities participate in the APM under an
agreement with CMS; (2) the APM requires that APM Entities include at
least one MIPS eligible clinician on a Participation List; and (3) the
APM bases payment incentives on performance (either at the APM Entity
or eligible clinician level) on cost/utilization and quality measures.
We understood that under some APMs the APM Entity may enter into
agreements with clinicians or entities that have supporting or
ancillary roles to the APM Entity's performance under the APM, but are
not participating under the APM Entity and therefore are not on a
Participation List. We proposed not to consider eligible clinicians
under such arrangements to be participants for purposes of the APM
Entity group to which the APM scoring standard would apply. We also
proposed that the APM scoring standard would not apply for certain APMs
in which the APM Entities participate under statute or our regulations
rather than under an agreement with us. We solicited comments on how
the APM scoring standard should apply to those APMs as well.
The criteria for the identification of MIPS APMs are independent of
the criteria for Advanced APM determinations discussed in section
II.F.4. of this final rule with comment period, so a MIPS APM may or
may not also be an Advanced APM. As such, it would be possible that an
APM meets all three proposed criteria to be a MIPS APM, but does not
meet the Advanced APM criteria described in section II.F.4. of this
final rule with comment period. Conversely, it would be possible that
an Advanced APM does not meet the criteria listed above because it does
not include MIPS eligible clinicians as participants.
The APM scoring standard would not apply to MIPS eligible
clinicians involved in APMs that include only facilities as
participants. APMs that do not base payment on cost/utilization and
quality measures also would not meet the proposed criteria for the APM
scoring standard. Instead, MIPS eligible clinicians participating in
these APMs would need to meet the generally applicable MIPS data
submission requirements for the MIPS performance period, and their
performance would be assessed using the generally applicable MIPS
standards, either as individual eligible clinicians or as a group under
MIPS.
As we explained in the proposed rule, we believe the proposed APM
scoring standard would help alleviate certain duplicative, unnecessary,
or competing data submission requirements for MIPS eligible clinicians
participating in MIPS APMs. However, we were interested in public
comments on alternative methods that could reduce MIPS data submission
requirements to enable MIPS eligible clinicians participating in
Advanced APMs to maximize their focus on the care delivery redesign
necessary to succeed within the Advanced APM while maintaining the
statutory framework that excludes only certain eligible clinicians from
MIPS and reducing reporting burden on Advanced APM participants.
We proposed that the APM scoring standard would not apply to MIPS
eligible clinicians participating in APMs that are not MIPS APMs.
Rather, such MIPS eligible clinicians would submit data to MIPS and
have their performance assessed either as an individual MIPS eligible
clinician or group as described in section II.E.2 of this final rule
with comment period. Some APMs may involve certain types of MIPS
eligible clinicians that are affiliated with an APM Entity but not
included in the APM Entity group because they are not participants of
the APM Entity. We proposed that even if the APM meets the criteria to
be a MIPS APM, MIPS eligible clinicians who are not included in the
MIPS APM Participation List would not be considered part of the
participating APM Entity group for purposes of the
[[Page 77247]]
APM scoring standard. For instance, MIPS eligible clinicians in the
Next Generation ACO Model might be involved in the APM through a
business arrangement with the APM Entity as ``preferred providers'' but
are not directly tied to beneficiary attribution or quality measurement
under the APM.
The following is a summary of the comments we received regarding
our proposals for the criteria for an APM to be a MIPS APM, and for the
APM scoring standard to apply only to MIPS eligible clinicians who are
included in the APM Entity group on a MIPS APM Participation List.
Comment: A commenter sought clarity on the term ``MIPS APM''.
Response: The term ``MIPS APM'' is used to describe an APM that
meets the three criteria for purposes of the APM scoring standard: (1)
APM Entities participate in the APM under an agreement with CMS; (2)
the APM requires that APM Entities include at least one MIPS eligible
clinician on a Participation List; and (3) the APM bases payment
incentives on performance (either at the APM Entity or eligible
clinician level) on cost/utilization and quality measures. Individuals
and groups that do not participate in MIPS APMs will be scored under
the generally applicable MIPS scoring standards. We note that the APM
scoring standard has no bearing on the QP determination for eligible
clinicians in Advanced APMs.
Comment: Some commenters stated that the definition of MIPS APMs is
too limiting and prevents eligible clinicians in APMs that are not
considered MIPS APMs from reporting as APM Entities. Other commenters
indicated that basing payment on quality measures should not be a MIPS
APM criterion.
Response: We continue to believe the criteria we proposed for a
MIPS APM will appropriately identify APMs in which the eligible
clinicians would be subject to potentially duplicative and conflicting
incentives and reporting requirements if they were required to report
and be scored under the generally applicable MIPS standard. The
eligible clinicians in a MIPS APM that is not also an Advanced APM are
considered MIPS eligible clinicians and are subject to MIPS reporting
requirements and payment adjustments (unless they are otherwise
excluded). The eligible clinicians in a MIPS APM that is an Advanced
APM are also considered MIPS eligible clinicians unless they meet the
threshold to be a QP for a year. In any MIPS APM, whether or not it is
also an Advanced APM, eligible clinicians may already be required to
report on the quality, cost and other measures on which their
performance is assessed as part of their participation in the APM,
leading to potentially duplicative or conflicting reporting under MIPS.
Additionally, eligible clinicians in these MIPS APMs already have
payment incentives tied to performance on quality and cost/utilization
measures, creating the potential for conflicting assessments based on
the same or similar data. Although other APMs may have similar
reporting requirements to the MIPS APMs such that there is some level
of duplicative reporting, unless an APM includes performance metrics
tied to payment incentives in the APM, we do not believe there is the
same potential for duplication and conflict. We continue to believe
that eligible clinicians in APMs that meet all three of the criteria to
be MIPS APMs would face a substantial level of duplication and/or
conflict between reporting and assessment under the APM and the
generally applicable MIPS standard. In addition, the participants in
other APMs may not be subject to MIPS at all because the participants
are not MIPS eligible clinicians. To the extent that eligible
clinicians do participate in APMs that are not MIPS APMs, we believe
they would often be in a position to consider group reporting options
under MIPS.
Comment: A few commenters suggested CMS simplify MIPS reporting and
scoring by imposing no additional reporting requirements on MIPS
eligible clinicians in MIPS APMs in order for them to receive a MIPS
final score. One
commenter stated the APM Scoring Standard does not go far enough to
reduce reporting burden because APM participants will still be required
to report improvement activities and advancing care information.
Response: We believe the proposed policy included meaningful
reductions in reporting burden for MIPS APM participants. The
additional policies we are finalizing in this rule (such as assigning a
MIPS APM improvement activities score) will reduce this burden further.
However, we do not believe it would be feasible to fully eliminate
reporting requirements for MIPS APM participants while adhering to the
core goals and structure of MIPS.
Comment: A few commenters stated it is untenable to require
physician groups to simultaneously pursue quality metrics, reduce
costs, and build the infrastructure required to participate in APMs and
MIPS. A few commenters indicated that the APM scoring standard may
undermine the intent of the statute to have eligible clinicians join
APMs by not providing sufficient reductions in burden under MIPS.
Another commenter recommended that the third MIPS APM criterion be
changed to ``the APM bases payment incentives on performance on cost/
utilization and/or quality measures'' instead of requiring that the APM
base payment incentives on both cost/utilization and quality measures.
Several commenters recommended that CMS make QP determinations early
enough so that eligible clinicians participating in Advanced APMs would
know in advance of the MIPS submission period whether they are QPs for
the year and, as such, would not have to report to MIPS at all. One
commenter did not support implementation of the APM scoring standard
because the commenter stated that the proposal was confusing and may
incentivize physicians to remain in the FFS program rather than
progress towards APMs.
Response: We recognize that MIPS APM participants are diligently
working to provide high quality, cost-effective care to their patients.
We also recognize the burden of reporting to more than one CMS program.
We proposed to adopt the APM scoring standard with the intent of
reducing the reporting burden for eligible clinicians and alleviating
duplicative and/or conflicting payment methodologies that could
potentially distract eligible clinicians from the goals and objectives
they agreed to as an APM participant, or provide incentives that
conflict with those under the APM. We also acknowledge that some
stakeholders may find the APM scoring standard requirements confusing,
and we will continue to consider ways to further simplify the APM
scoring standard in future rulemaking. We believe much of this
confusion will be resolved through continued discussions with all of
our stakeholders, participants, and patients, through CMS's planned
technical assistance and education and outreach activities for the
Quality Payment Program, and through experience with this new program
in the first performance year. We also note that the finalized QP
Performance Period, described in section II.F.5. of this final rule
with comment period, modifies the proposed QP determination timeframe
so that eligible clinicians who are QPs for a year will not need to
report MIPS data. However, an eligible clinician that is in an Advanced
APM but does not meet the QP threshold will still be subject to MIPS.
Furthermore, eligible clinicians who are participants in a MIPS APM
that is not an Advanced APM cannot be QPs and thus will be
[[Page 77248]]
subject to MIPS under the APM scoring standard.
Comment: A commenter recommended that CMS not reward low-value
care. The commenter indicated that by reducing the weight of the cost
performance category to zero and the weight of the quality performance
category to zero for MIPS APMs other than the Shared Savings Program
and Next Generation ACO Model, CMS may allow such MIPS APMs to perform
poorly on measures of efficiency and quality at the expense of other
clinicians who are truly delivering high-value care. The commenter
suggested that CMS either measure all MIPS eligible clinicians in the
same way, or allow MIPS APM participants to elect a neutral score for
the quality and cost MIPS performance categories.
Response: We do not believe the APM scoring standard rewards low-
value care, but rather that it provides MIPS eligible clinicians in
MIPS APMs a way to meet the requirements of the MIPS while focusing on
the goals of the APM to improve quality and lower the cost of care. The
terms and conditions of MIPS APMs themselves hold participants
accountable for the cost and quality of care. In accordance with the
statute, only Partial QPs have the option whether to report and be
subject to a MIPS payment adjustment for a year, as described in
section II.F.5. of this final rule with comment period. All MIPS
eligible clinicians, including those subject to the APM scoring
standard, will continue to receive final scores and MIPS payment
adjustments.
Comment: A commenter indicated the creation of the APM scoring
standard provides a large advantage to MIPS APM participants,
disadvantaging other MIPS eligible clinicians.
Response: We acknowledge that eligible clinicians in MIPS APMs may
achieve high scores in some MIPS performance categories. In some
categories such as improvement activities, the statute encourages and
credits participation in an APM. In others, MIPS eligible clinicians
may perform well because of the requirements they meet by virtue of
participating in MIPS APMs. However, we believe all MIPS eligible
clinicians have the opportunity to score highly, and as such we do not
believe the APM scoring standard will necessarily disadvantage other
MIPS eligible clinicians. We believe MIPS eligible clinicians under the
APM scoring standard have the potential to receive high MIPS payment
adjustments because they successfully perform the requisite activities,
not simply because they participate in an APM.
Comment: One commenter recommended CMS ensure that the APM scoring
standard actually reduces administrative burden in order to allow MIPS
APM participants to focus on APM efforts.
Response: We believe this final rule with comment period addresses
many of the concerns expressed by commenters about the MIPS reporting
burden for MIPS APM participants and we will continue to work to
identify ways to ensure APMs and their participants can focus their
efforts to achieve the care transformation goals of the APM.
Comment: Several commenters expressed support for the APM scoring
standard as proposed and applauded CMS for its efforts to reduce
reporting burden and allow MIPS APM participants to focus on the aims
of those APMs without misaligning incentives or having redundant or
conflicting requirements across programs. One commenter stated they
supported the proposed APM scoring standard, but thought CMS should
offer sufficient education and outreach to clinicians so they
understand it, as it adds complexity to the program. Two commenters
requested that CMS develop a flexible scoring methodology for MIPS APMs
that would recognize the significant investments to transform
healthcare made by APM participants. One commenter requested that the
APM scoring standard incorporate all MIPS eligible clinicians in large
multispecialty groups that may have some but not all MIPS eligible
clinicians participating in MIPS APMs. Another commenter recommended
that the APM scoring standard be retained in the future, allowing APM
decisions to be made with clarity, while another commenter supported
the APM scoring standard generally but thought it should be optional.
Response: We appreciate the general support for the proposed APM
scoring standard. We will continue to consider future refinements to
the APM scoring standard to ensure we are supporting eligible
clinicians in their efforts to transform health care and participate in
new payment and care delivery models. Although we understand that some
organizations may have some members of their practices in APMs and
others not in APMs, we do not believe that the APM scoring standards
should apply more broadly than the identified group of actual
participants in MIPS APMs, that is, the eligible clinicians included on
an APM Entity's Participation List.
Comment: A few commenters disagreed with our statements in the
proposed rule suggesting that APMs focused on hospitals do not have any
MIPS eligible clinicians as participants, stating that surgeons will be
involved in hip and knee replacements under CJR and that CJR quality
performance measures should count for them for purposes of MIPS.
Another commenter stated that the MIPS APM criteria should be broader
to include the BPCI Initiative, CJR, and other episode payment models.
A few commenters stated that such APMs have been successful at reducing
costs and improving quality and that not including them as MIPS APMs
discourages clinicians from participation. A few commenters suggested
that CMS should amend facility-based APMs to require Participation
Lists. One commenter suggested that the APM scoring standard
requirement that a MIPS APM must require APM Entities to include at
least one eligible clinician on a Participation List should be delayed
until more MIPS APMs are available. A few commenters suggested the
criteria for a MIPS APM be expanded to include other APMs such as those
APMs that have an agreement with another payer outside the Medicare
program or those that have a CMS agreement to participate in an APM
through another entity such as a convener. One commenter expressed
concern that by not including all APMs as MIPS APMs some APM
participants will be forced to report twice on quality.
Response: An APM that is hospital-based may be a MIPS APM if it
meets all of the MIPS APM criteria, including the criterion that the
APM must require APM Entities to include at least one MIPS eligible
clinician on a Participation List. If this criterion is not met, the
APM is not a MIPS APM and the APM scoring standard does not apply.
This is particularly relevant to facility- or hospital-based APMs, some
of which do not require APM Entities to maintain Participation Lists: any
MIPS eligible clinicians that do not qualify as QPs or Partial QPs, and
are not included on a Participation List of an APM Entity that
participates in the MIPS APM, would report to MIPS and be scored
according to the generally applicable MIPS requirements for an
individual or group. The APM scoring standard is intended to ensure
that the MIPS eligible clinicians that are directly and collectively
accountable for beneficiary attribution and quality and cost/
utilization performance under the MIPS APM are able to focus their
efforts on the care transformation objectives of the APM rather than on
potentially duplicative reporting of measures. We note that the MIPS
eligible clinicians that are subject to the APM scoring
[[Page 77249]]
standard are not necessarily the same as the eligible clinicians who
could become QPs via participation in Advanced APMs, as described in
section II.F.5. of this final rule with comment period. For instance,
in certain circumstances, Affiliated Practitioners could become QPs,
but because the Advanced APM does not base payment incentives for these
eligible clinicians (either at the APM Entity or the eligible clinician
level) on their performance on cost/utilization and quality measures, we
do not consider the APM requirements to be sufficiently related to MIPS
reporting requirements such that the APM scoring standard should be
applied. In other words, the QP determination for the APM incentive and
the MIPS performance categories measure different aspects of
performance that align differently with the roles of affiliated
practitioners. The QP determination depends on the level of payments or
patients furnished services through an Advanced APM. In contrast, MIPS
payment adjustments depend on an assessment of performance on cost and
quality in four categories. Whereas affiliated practitioners may
furnish services through an Advanced APM, contributing to collective
achievement under the APM, the QP threshold, in and of itself, does not
assess or directly incentivize their performance based on cost and
quality. Therefore, we do not believe there is the same potential for
overlapping requirements under MIPS and APMs for such MIPS eligible
clinicians. Under certain Advanced APMs such as CJR, Affiliated
Practitioners may be the primary eligible clinicians receiving payment
through the Advanced APM, but cost and quality measurement and
reporting under the Advanced APM are the responsibility of
participating hospitals rather than eligible clinicians. As such, there
is minimal potential for overlap between requirements under MIPS and
the APM for these MIPS eligible clinicians.
We agree with commenters that we should continue to consider
whether there are opportunities for additional APMs, including existing
episode payment models, to become MIPS APMs. As we work toward that
goal we believe we should move forward with the policy to avoid
potentially duplicative or conflicting reporting or incentives for MIPS
eligible clinicians participating in APMs that currently meet the MIPS
APM criteria. In the future, we may consider amending existing APMs to
meet MIPS APM criteria. However, as stated in the previous response, we
do not believe that application of the APM scoring standard should be
expanded to include MIPS eligible clinicians such as Affiliated
Practitioners whose roles are not directly linked to quality and cost/
utilization measures under the APM, or that the MIPS APM criteria
should be expanded to include APMs that do not tie payment incentives
to performance on quality and cost/utilization measures or APMs (such
as CJR) that do not require APM Entities to have at least one eligible
clinician on a Participation List. In these instances, we do not
believe the requirements of the APM are sufficiently connected to MIPS
reporting requirements and scoring such that there is significant
potential for duplicative reporting or conflicting incentives between
the APM and MIPS, the avoidance of which is the underlying purpose of
the APM scoring standard.
Comment: Two commenters requested that CMS clarify that the MIPS
APM payment adjustments resulting from the MIPS APM scoring standard
will not be included in the Shared Savings Program and Next Generation
ACO Model expenditures for benchmark calculations.
Response: MIPS payment adjustments resulting from the APM scoring
standard are the same as MIPS adjustments for all other MIPS eligible
clinicians. There are no unique ``MIPS APM payment adjustments.''
Rather, the APM scoring standard is only a particular scoring
methodology for deriving a final score that results in a MIPS payment
adjustment for an eligible clinician. Each APM has its own benchmarking
methodology--benchmarking is not necessarily standard across APMs.
Making a single determination with respect to the use of MIPS payment
adjustments in APM benchmarking is outside the scope of this final rule
with comment period.
Comment: One commenter suggested that CMS create an ``Other Payer
MIPS APM'' category.
Response: We appreciate the idea of allowing MIPS scoring to be
affected by participation in certain payment arrangements with other
payers and we may consider the feasibility of doing so in the future in
concert with the introduction of the All-Payer Combination Option.
After considering these comments, we are finalizing the criteria
for an APM to be a MIPS APM as proposed with one modification to the
first criterion in order to encompass APMs with terms defined through
law or regulation. MIPS APMs are APMs that meet the following criteria:
(1) APM Entities participate in the APM under an agreement with CMS or
by law or regulation; (2) the APM requires that APM Entities include at
least one MIPS eligible clinician on a Participation List; and (3) the
APM bases payment incentives on performance (either at the APM Entity
or eligible clinician level) on cost/utilization and quality measures.
Below we describe in detail how MIPS APM participants will be
identified from an APM Participation List to be included in the APM
Entity group under the APM scoring standard.
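The three finalized criteria amount to a conjunctive test: an APM is a MIPS APM only if all three are met. The sketch below is an illustrative aid, not CMS code; the type and field names are assumptions rather than regulatory terms.

```python
from dataclasses import dataclass

@dataclass
class Apm:
    """Attributes mirror the three finalized MIPS APM criteria (names assumed)."""
    cms_agreement_or_law: bool             # (1) APM Entities participate under a CMS agreement, law, or regulation
    clinician_on_participation_list: bool  # (2) at least one MIPS eligible clinician on a Participation List
    pay_tied_to_cost_and_quality: bool     # (3) payment incentives based on both cost/utilization and quality

def is_mips_apm(apm: Apm) -> bool:
    """A MIPS APM must satisfy all three criteria."""
    return (apm.cms_agreement_or_law
            and apm.clinician_on_participation_list
            and apm.pay_tied_to_cost_and_quality)
```

Under this test, an APM that does not require a Participation List fails criterion (2) and is not a MIPS APM, consistent with the discussion of CJR above.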
We are also finalizing the proposal that the APM scoring standard
does not apply to MIPS eligible clinicians who are not on a
Participation List for an APM Entity group in a MIPS APM. MIPS eligible
clinicians who are not part of the APM Entity group to which the APM
scoring standard applies may choose to report to MIPS as individuals or
groups according to the generally applicable MIPS rules.
(2) APM Scoring Standard Performance Period
We proposed that the performance period for MIPS eligible
clinicians participating in MIPS APMs would match the generally
applicable performance period for MIPS proposed in section II.E.4. of
the proposed rule. We proposed this policy would apply to all MIPS
eligible clinicians participating in MIPS APMs (those that meet the
criteria specified in section II.E.5.h.1. of the proposed rule) except
in the case of a new MIPS APM for which the first APM performance
period begins after the start of the corresponding MIPS performance
period. In this instance, the participating MIPS eligible clinicians in
the new MIPS APM would submit data to MIPS in the first MIPS
performance period for the APM either as individual MIPS eligible
clinicians or as a group using one of the MIPS data submission
mechanisms for all four performance categories, and report to us using
the APM scoring standard for subsequent MIPS performance period(s).
Additionally, we anticipate that there might be MIPS APMs that would
not be able to use the APM scoring standard (even though they met the
criteria for the APM scoring standard and were treated as MIPS APMs
in the prior MIPS performance period) in their last year of operation
because of technical or resource issues. For example, a MIPS APM in its
final year may end earlier than the end of the MIPS performance period
(proposed to be December 31). We might not have continuing resources
dedicated or available to continue to support the MIPS APM activities
under the APM scoring standard if the MIPS
[[Page 77250]]
APM ends during the MIPS performance period. Therefore, if we determine
it is not feasible for the MIPS eligible clinicians participating in
the APM Entity to report to MIPS using this APM scoring standard in an
APM's last year of operation, the MIPS eligible clinicians in the MIPS
APM would need to submit data to MIPS either as individual MIPS
eligible clinicians or as a group using one of the MIPS data submission
mechanisms for the applicable performance period. We proposed that the
eligible clinicians in the MIPS APM would be made aware of this
decision in advance of the relevant MIPS performance period.
The following is a summary of the comments we received regarding
our proposal that the APM scoring standard performance period will be
the same as the MIPS performance period.
Comment: A few commenters recommended CMS maintain consistency
between the reporting period for MIPS and MIPS APMs to reduce
administrative burden, and a commenter supported the same 12-month
performance period for use by MIPS and APMs. One commenter requested a
90-day reporting period for 2017.
Response: We agree with the commenters that aligning the
performance periods reduces administrative burden. We will maintain the
12-month performance period for the APM scoring standard, but data
submitted for the advancing care information and, if necessary,
improvement activities performance categories will follow the generally
applicable MIPS data submission requirements regarding the number of
measures and activities required to be reported during the performance
period in order to receive a score for these performance categories.
The quality performance category data for MIPS APMs will be submitted
in accordance with the specific reporting requirements of the APM,
which for most MIPS APMs covers the same 12-month performance period
that will be used for the APM scoring standard.
Comment: Two commenters requested CMS provide guidance for eligible
clinicians in a MIPS APM that closes before the end of the performance
period.
Response: We will post the list of MIPS APMs prior to the first day
of the MIPS performance period for each year. If an APM would otherwise
have qualified as a MIPS APM but is ending before the end of the
performance period, it will not appear on this list. We will
notify participants in any such APMs in advance of the start of the
performance period if they will need to report to MIPS using the MIPS
individual or group reporting option.
We are finalizing the APM scoring standard performance period to
align with the MIPS performance period.
(3) How the APM Scoring Standard Differs From the Assessment of Groups
and Individual MIPS Eligible Clinicians Under MIPS
We believe that establishing an APM scoring standard under MIPS
will allow APM Entities and their participating eligible clinicians to
focus on the goals and objectives of the MIPS APM to improve quality
and lower costs of care while avoiding potentially conflicting
incentives and duplicative reporting that could occur as a result of
having to submit separate or additional data to MIPS. The APM scoring
standard we proposed is similar to group assessment under MIPS as
described in section II.E.3.d. of the proposed rule, but would differ
in one or more of the following ways: (1) Depending on the terms and
conditions of the MIPS APM, an APM Entity could be comprised of a sole
MIPS eligible clinician (for example, a physician practice with only
one eligible clinician could be considered an APM Entity); (2) the APM
Entity could include more than one unique TIN, as long as the MIPS
eligible clinicians are identified as participants in the APM by their
unique APM participant identifiers; (3) the composition of the APM
Entity group could include APM participant identifiers with TIN/NPI
combinations such that some MIPS eligible clinicians in a TIN are APM
participants and other MIPS eligible clinicians in that same TIN are
not APM participants. In contrast, assessment as a group under MIPS
requires a group to be comprised of at least two MIPS eligible
clinicians who have assigned their billing rights to a TIN. It also
requires that all MIPS eligible clinicians in the group use the same
TIN.
In addition to the APM Entity group composition being potentially
different from that of a group as generally defined under MIPS, we
proposed for the APM scoring standard that we would generate a MIPS
final score by aggregating all scores for MIPS eligible clinicians in
the APM Entity that is participating in the MIPS APM to the level of
the APM Entity. As we explained in the proposed rule, we believe that
aggregating the MIPS performance category scores at the level of the
APM Entity is more meaningful to, and appropriate for, these MIPS
eligible clinicians because they have elected to participate in a MIPS
APM and collectively focus on care transformation activities to improve
the quality of care.
Further, depending on the type of MIPS APM, we proposed that the
weights assigned to the MIPS performance categories under the APM
scoring standard for MIPS eligible clinicians who are participating in
a MIPS APM may be different from the performance category weights for
MIPS eligible clinicians not participating in a MIPS APM for the same
performance period. For example, we proposed that under the APM scoring
standard, the weight for the cost performance category will be zero and
that for certain MIPS APMs, the weight for the quality performance
category will be zero for the 2019 payment year. Where the weight for
a performance category is zero, neither the APM Entity nor the MIPS
eligible clinicians in the MIPS APM would need to report data in that
category, and we would redistribute the weights for the quality and
cost performance categories to the improvement activities and advancing
care information performance categories to maintain a total weight of
100 percent.
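The reweighting described above can be sketched in Python. This is an
illustrative aid only and not part of the rule; in particular, the
proportional redistribution shown here is an assumption for
illustration, since the rule itself fixes the reweighted values for each
type of MIPS APM.

```python
def redistribute_weights(weights: dict[str, float],
                         zeroed: set[str]) -> dict[str, float]:
    """Zero out the given performance categories and spread their
    weight across the remaining categories so the total stays 1.0.

    NOTE: proportional redistribution is an illustrative assumption;
    the rule prescribes the specific reweighted values per MIPS APM.
    """
    removed = sum(w for c, w in weights.items() if c in zeroed)
    remaining = {c: w for c, w in weights.items() if c not in zeroed}
    scale = 1.0 / (1.0 - removed)
    out = {c: 0.0 for c in zeroed}
    out.update({c: w * scale for c, w in remaining.items()})
    return out

# Hypothetical base weights (assumed for the example): quality 60%,
# cost 0%, improvement activities 15%, advancing care information 25%.
base = {"quality": 0.60, "cost": 0.0,
        "improvement_activities": 0.15, "advancing_care_info": 0.25}
apm_weights = redistribute_weights(base, zeroed={"quality", "cost"})
# Total weight is maintained at 100 percent.
assert abs(sum(apm_weights.values()) - 1.0) < 1e-9
```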
To implement certain elements of the APM scoring standard, we need
to use the Shared Savings Program (section 1899 of the Act) and CMS
Innovation Center (section 1115A of the Act) authorities to waive
specific statutory provisions related to MIPS reporting and scoring.
Section 1899(f) of the Act authorizes waivers of title XVIII
requirements as may be necessary to carry out the Shared Savings
Program, and section 1115A(d)(1) of the Act authorizes waivers of title
XVIII requirements as may be necessary solely for purposes of testing
models under section 1115A of the Act. For each section in which we
proposed scoring methodologies and waivers to enable the proposed
approaches, we described how the use of waivers is necessary under the
respective waiver authority standards. The underlying purpose of APMs
is for CMS to pay for care in ways that are unique from FFS payment and
to test new ways of measuring and assessing performance. If the data
submission requirements and associated adjustments under MIPS are not
aligned with APM-specific goals and incentives, the participants
receive conflicting messages from us on priorities, which could create
uncertainty and severely degrade our ability to evaluate the impact of
any particular APM on the overall cost and quality of care. Therefore,
we explained our belief that, for the reasons stated in section
II.E.5.h. of the proposed rule, certain waivers are necessary for
testing and operating
[[Page 77251]]
APMs and for maintaining the integrity of our evaluation of those APMs.
In the proposed rule we noted that for at least the first
performance year, we do not anticipate that any APMs other than those
under sections 1115A or 1899 of the Act would meet the criteria to be
MIPS APMs. In the event that we anticipate other types of APMs (for
example, demonstrations under section 1866C of the Act or APMs required
by federal law) will become MIPS APMs for a future year, we will address MIPS
scoring for eligible clinicians in those APMs in future rulemaking.
The following is a summary of the comments we received regarding
our proposals to use the Shared Savings Program (section 1899 of the
Act) and CMS Innovation Center (section 1115A of the Act) authorities
to waive specific statutory provisions related to MIPS reporting and
scoring to implement the APM Scoring Standard for MIPS APMs and to
apply the MIPS final score at the APM Entity level.
Comment: A few commenters expressed support for CMS' use of waiver
authorities to establish the APM scoring standard. Several commenters
also supported the proposal to calculate the final score at the APM
Entity level. One commenter supported averaging scores for all
clinicians in a MIPS APM Entity for purposes of the MIPS payment
adjustment. A few commenters had concerns about aggregating all data
for the clinicians linked to an APM Entity, and one commenter
recommended that the APM scoring standard be optional.
Response: We continue to believe the final score derived at the APM
Entity level should be the score used for purposes of determining the
MIPS payment adjustment for each MIPS eligible clinician in that APM
Entity group. As part of their participation in any MIPS APM, eligible
clinicians should be working collaboratively and advancing shared care
goals for aligned patients. We believe this collaboration toward shared
goals under the MIPS APM differentiates these MIPS eligible clinicians
from those in a MIPS group defined by a billing TIN, and supports our
proposal to score these clinicians as a group.
The APM Entity final score is derived by aggregating the scores for
each of the performance categories as applicable. For example, if the
CPC+ model is determined to be a MIPS APM, participating MIPS eligible
clinicians in CPC+ will not be evaluated in the cost and quality
performance categories, which will have a zero weight for the first
performance year. In this example, the final score will be calculated
for MIPS eligible clinicians at the APM Entity level by adding the
weighted advancing care information score and the assigned improvement
activities score for the MIPS APM (see below for the final policies on
the scoring for these performance categories). This same final score
calculated at the APM Entity level will be applied to each MIPS
eligible clinician TIN/NPI combination in the APM Entity as identified
on the APM Entity's Participation List.
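The entity-level calculation in this example can be sketched as follows.
This is a hypothetical illustration, not part of the rule; the category
weights, scores, and data shapes are assumed for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CategoryScore:
    score: float   # performance category score (0-100)
    weight: float  # category weight under the APM scoring standard

def entity_final_score(categories: dict[str, CategoryScore]) -> float:
    """One final score for the APM Entity: the sum of the weighted
    category scores (zero-weighted categories contribute nothing)."""
    return sum(c.score * c.weight for c in categories.values())

def apply_to_participants(final_score: float,
                          participants: list[tuple[str, str]]):
    """Every TIN/NPI combination on the APM Entity's Participation
    List receives the same APM Entity final score."""
    return {tin_npi: final_score for tin_npi in participants}

# Hypothetical MIPS APM with quality and cost weighted to zero and
# assumed weights for the remaining two categories:
cats = {"quality": CategoryScore(0.0, 0.0),
        "cost": CategoryScore(0.0, 0.0),
        "improvement_activities": CategoryScore(100.0, 0.25),
        "advancing_care_info": CategoryScore(80.0, 0.75)}
fs = entity_final_score(cats)  # 100*0.25 + 80*0.75 = 85.0
scores = apply_to_participants(fs, [("123456789", "1111111111"),
                                    ("123456789", "2222222222")])
```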
Comment: A commenter requested clarification on how reporting will
be accomplished with groups where MIPS eligible clinicians participate
in multiple APMs, especially multiple Advanced APMs.
Response: As finalized in section II.E.6. of this final rule with
comment period, if a single TIN/NPI combination for a MIPS eligible
clinician is in two or more MIPS APMs, we will use the highest final
score to determine the MIPS payment adjustment for that MIPS eligible
clinician. MIPS adjustments apply to the TIN/NPI combination, so to the
extent that a MIPS eligible clinician (NPI) participates in multiple
MIPS APMs with different TINs, each of those TIN/NPI combinations would
be assessed separately under each respective APM Entity.
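The highest-score rule for a TIN/NPI combination that appears in two or
more MIPS APMs reduces to a simple maximum; the function below is an
illustrative sketch, not CMS's implementation.

```python
def mips_adjustment_score(entity_final_scores: list[float]) -> float:
    """A TIN/NPI combination in two or more MIPS APMs is assessed
    under each respective APM Entity; the highest final score is
    used to determine the MIPS payment adjustment."""
    return max(entity_final_scores)

# Hypothetical final scores from two APM Entities for one TIN/NPI:
best = mips_adjustment_score([72.5, 85.0])  # 85.0
```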
We are finalizing the proposal to use the Shared Savings Program
and CMS Innovation Center authorities under sections 1899 and 1115A of
the Act, respectively, to waive specific statutory requirements related
to MIPS reporting and scoring in order to implement the APM scoring
standard. We note that although we proposed to use our authority under
section 1899(f) of the Act to waive these statutory requirements in
order to implement the APM scoring standard for MIPS eligible
clinicians participating in Shared Savings Program ACOs, we believe we
could also use our authority under section 1899(b)(3)(D) of the Act to
accomplish this result. Section 1899(b)(3)(D) of the Act allows us to
incorporate reporting requirements under section 1848 of the Act into
the reporting requirements for the Shared Savings Program, as we
determine appropriate, and to use alternative criteria than would
otherwise apply. Thus, we believe that section 1899(b)(3)(D) of the Act
also provides authority to apply the APM scoring standard for MIPS
eligible clinicians participating in a Shared Savings Program ACO
rather than requiring these MIPS eligible clinicians to report
individually or as a group using one of the MIPS data submission
mechanisms.
We are also finalizing our proposal to score MIPS eligible
clinicians in the MIPS APM at the APM Entity level. The final score
calculated at the APM Entity level will be applied to each MIPS
eligible clinician in the APM Entity group.
(4) APM Participant Identifier and Participant Database
To ensure we have accurately captured performance data for all of
the MIPS eligible clinicians that are participating in an APM, we
proposed to establish and maintain an APM participant database that
would include all of the MIPS eligible clinicians who are part of the
APM Entity. We would establish this database to track participation in
all APMs, in addition to specifically tracking participation in MIPS
APMs and Advanced APMs. We proposed that each APM Entity be identified
in the MIPS program by a unique APM Entity identifier, and we also
proposed that the unique APM participant identifier for a MIPS eligible
clinician would be a combination of four identifiers including: (1) APM
identifier established by CMS (for example, AA); (2) APM Entity
identifier established by CMS (for example, A1234); (3) the eligible
clinician's billing TIN (for example, 123456789); and (4) NPI (for
example, 1111111111). The use of the APM participant identifier will
allow us to identify all MIPS eligible clinicians participating in an
APM Entity, including instances in which the MIPS eligible clinicians
use a billing TIN that is shared with MIPS eligible clinicians who are
not participating in the APM Entity. In the proposed rule, we stated
that we would plan to communicate to each APM Entity the MIPS eligible
clinicians who are included in the APM Entity group in advance of the
applicable MIPS data submission deadline for the MIPS performance
period.
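Conceptually, the four-part APM participant identifier could be
assembled as follows, using the example values given above. The
delimiter and function shape are illustrative assumptions; the rule
specifies only the four component identifiers.

```python
def apm_participant_identifier(apm_id: str, apm_entity_id: str,
                               tin: str, npi: str) -> str:
    """Combine the four identifiers into one participant key.
    The hyphen delimiter is an assumption for illustration only."""
    return "-".join((apm_id, apm_entity_id, tin, npi))

# Example values from the rule text: APM identifier "AA", APM Entity
# identifier "A1234", billing TIN "123456789", NPI "1111111111".
pid = apm_participant_identifier("AA", "A1234", "123456789", "1111111111")
```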
Under the Shared Savings Program, each ACO is formed by a
collection of Medicare-enrolled TINs (ACO participants). Under our
regulation at 42 CFR 425.118, all Medicare enrolled individuals and
entities that have reassigned their rights to receive Medicare payment
to the TIN of the ACO participant must agree to participate in the ACO
and comply with the requirements of the Shared Savings Program. Because
all providers and suppliers that bill through the TIN of an ACO
participant are required to agree to participate in the ACO, all MIPS
eligible clinicians that bill through the TIN of an ACO participant are
considered to be participating in the ACO. For purposes of the APM
scoring standard, the ACO
[[Page 77252]]
would be the APM Entity. The Shared Savings Program has established
criteria for determining the list of eligible clinicians participating
under the ACO, and we would use the same criteria for determining the
list of MIPS eligible clinicians included in the APM Entity group for
purposes of the APM scoring standard.
We recognize that there may be scenarios in which MIPS eligible
clinicians may change TINs, use more than one TIN for billing Medicare,
change their APM participation status, and/or change other practice
affiliations during a performance period. Therefore, we proposed that
only those MIPS eligible clinicians who are on the Participation List
for the APM Entity in a MIPS APM on December 31 (the last day of the
performance period) would be considered part of the APM Entity group
for purposes of the APM scoring standard. Consequently, MIPS eligible
clinicians who are not listed as participants of an APM Entity in a
MIPS APM at the end of the performance period would need to submit data
to MIPS through one of the MIPS data submission mechanisms and would
have their performance assessed either as individual MIPS eligible
clinicians or as a group for all four MIPS performance categories. For
example, under the proposal, a MIPS eligible clinician who participates
in the APM Entity on January 1, 2017, and leaves the APM Entity on June
15, 2017, would need to submit data to MIPS using one of the MIPS data
submission mechanisms and would have their performance assessed either
as an individual MIPS eligible clinician or as part of a group. This
approach for defining the applicable group of MIPS eligible clinicians
was consistent with our proposal for identifying eligible clinician
groups for purposes of QP determinations outlined in section II.F.5.b.
of the proposed rule; the group of eligible clinicians we use for
purposes of a QP determination would be the same as that used for the
APM scoring standard. This would be an annual process for each MIPS
performance period.
The following is a summary of the comments we received regarding
our proposals to establish an APM participant identifier, a CMS
database to identify and track the APM participants, and the dates that
we will use to determine if a MIPS APM eligible clinician will be
included in the MIPS APM for purposes of MIPS reporting under the APM
scoring standard.
Comment: A commenter suggested CMS use the current CMS enrollment
infrastructure, such as PECOS, to identify and track APM participants to
provide an incentive for eligible clinicians to update their Medicare
enrollment information, which in turn would provide CMS with more
accurate data on the MIPS eligible clinicians that are in a MIPS APM.
Response: We will be using existing systems to the extent feasible
to ensure we have accurate data on MIPS eligible clinicians and APM
participants. Depending on the results of our assessment of available
data and systems, we may or may not include any particular system, such
as PECOS.
Comment: A number of commenters supported the use of an APM
participant identifier that includes the TIN and NPI for the MIPS APM
eligible clinicians and urged collaboration with vendors to build a
useful infrastructure. One commenter thought CMS should simplify this
APM participant identifier. Two commenters encouraged CMS to make the
APM participant identifiers available to stakeholders in real time via
an Application Program Interface (API). One commenter indicated the APM
participant identifier would add administrative complexity. Another
commenter encouraged CMS to make sure there is a consistent approach to
identifying both APM and MIPS participants.
Response: We believe the use of the APM participant identifier will
ensure we use accurate information regarding MIPS eligible clinicians
and their participation in APMs, and we believe that this will reduce
administrative complexity by reducing ambiguity. We appreciate the
suggestion to make the APM participant identifier available via an API,
and we are exploring a variety of methods to communicate this
information.
Comment: A few commenters were opposed to the December 31 date for
determining if the APM Entity participant would be included in the MIPS
APM for purposes of the APM scoring standard. A commenter did not
support this proposal because MIPS eligible clinicians could be
excluded if they were participating throughout the year but not on
December 31. One commenter suggested that the eligible clinician
should be included in the group if they were in the MIPS APM for more
than half of the performance period and another commenter suggested
they be considered as participating in the group if they were in the
MIPS APM for 90 days. Yet another commenter stated that CMS's proposed
policy for determining who participates in a given APM does not
sufficiently respond to the often complex billing relationships
clinicians maintain across TINs, and that these complex billing
relationships are especially true for academic medical center
clinicians who often relocate due to changes in employment based on the
academic year. The commenter suggested having a more flexible list of
dates for updating the list of MIPS eligible clinicians participating
in a MIPS APM (and therefore subject to the APM scoring standard) or
looking at claims rather than Participation Lists.
Response: We agree with the commenters that only using the December
31 date to determine whether an eligible clinician is a MIPS APM
participant could potentially impact a clinician's decision on whether
or when to leave a MIPS APM and their ability to report to MIPS if they
leave the MIPS APM prior to the end of the performance period. We also
recognize that an eligible clinician who participates in a MIPS APM in
the first 6 months of the performance period and then leaves the MIPS
APM may have difficulty reporting to MIPS independent of the APM
Entity. If the MIPS eligible clinician leaves the MIPS APM and joins a
group or another APM that is not a MIPS APM, the individual would
likely be included in the new group's MIPS reporting. But if the MIPS
eligible clinician does not join another group, then they would need to
report to MIPS as an individual. In such a case, the MIPS eligible
clinician may not be able to meet one or more of the MIPS performance
category reporting requirements. For example, a MIPS eligible clinician
who used CEHRT in an APM Entity through July of a performance period
may not have the CEHRT available to report the advancing care
information performance category as an individual MIPS eligible
clinician during the MIPS submission period. We are revising the points
in time at which we will assess whether a MIPS eligible clinician is on
a Participation List for purposes of the APM scoring standard. We will
review the Participation Lists for MIPS APMs on March 31, June 30, and
August 31. A MIPS eligible clinician on the Participation List for an
APM Entity in a MIPS APM on at least one of these three dates will be
included in the APM Entity group for the purpose of the APM scoring
standard. For example, if the Oncology Care Model (OCM) is determined
to be a MIPS APM, a MIPS eligible clinician who is identified on the
Participation List of an APM Entity participating in OCM from January 1
through April 25 of the performance year would be included in the APM
Entity group for purposes of the APM
[[Page 77253]]
scoring standard for that performance year.
Comment: A commenter requested clarification on whether a MIPS
eligible clinician who participates in a MIPS APM for part of the year
but leaves prior to the end of the performance period is allowed to
submit a partial year of MIPS data for the time they were not in the
MIPS APM.
Response: As discussed in section II.F.5. of this final rule with
comment period, we are adopting a modified version of the proposed
policy for defining the APM Entity group, which will be applicable to
both QP determinations and the APM scoring standard. Under the final
policy, if a MIPS eligible clinician is on the APM Participation List
on at least one of the APM participation assessment (Participation List
``snapshot'') dates, the MIPS eligible clinician will be included in
the APM Entity group for purposes of the APM scoring standard for the
applicable performance year. If the MIPS eligible clinician is not on
the APM Entity's Participation List on at least one of the snapshot
dates (March 31, June 30, or August 31), then the MIPS eligible
clinician will need to submit data to MIPS using the MIPS individual or
group reporting option and adhere to all generally applicable MIPS data
submission requirements to avoid a negative payment adjustment.
Therefore, if the applicable data submission requirements include full-
year reporting, the MIPS individual or group would need to report for
the full year.
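The snapshot-date test described above can be sketched as follows. This
is illustrative only; the interval-based input shape is an assumption,
and the actual determination is made from the Participation Lists each
APM maintains.

```python
from datetime import date

# Snapshot dates finalized for the APM scoring standard
# (shown here for a hypothetical 2017 performance year).
SNAPSHOT_DATES = (date(2017, 3, 31), date(2017, 6, 30), date(2017, 8, 31))

def in_apm_entity_group(membership_intervals) -> bool:
    """True if the clinician is on the Participation List on at least
    one snapshot date. `membership_intervals` is a list of inclusive
    (start, end) date pairs -- a hypothetical input shape."""
    return any(start <= snap <= end
               for snap in SNAPSHOT_DATES
               for (start, end) in membership_intervals)

# On a Participation List January 1 through April 25: included,
# because March 31 falls within that interval.
assert in_apm_entity_group([(date(2017, 1, 1), date(2017, 4, 25))])
# On a Participation List only from September onward: not included.
assert not in_apm_entity_group([(date(2017, 9, 1), date(2017, 12, 31))])
```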
Comment: A commenter recommended that CMS: (1) Allow ACOs to report
quality data and other information for MIPS on behalf of participating
clinicians who join an ACO mid-performance year but are not included on
the ACO Participation List until the following year, and (2) hold
harmless from negative MIPS payment adjustments those clinicians who
join the ACO mid-performance year but are not included on the ACO
Participation List until the following year. Another commenter
requested that MIPS APM participants who leave prior to the end of the
performance period be exempt from MIPS reporting because this may
hinder employment mobility. Some commenters suggested CMS indemnify
clinicians who joined an ACO mid-year from any negative MIPS payment
adjustments because the commenters believe these clinicians should not
be penalized for the hard work they put into the APM during the year
solely because they joined the APM Entity after the start of the
performance year.
Response: Each APM has specific rules as to when participants can
be added or removed from Participation Lists. If the MIPS eligible
clinician is on the MIPS APM Participation List on at least one of the
three snapshot dates (March 31, June 30, or August 31), then the MIPS
eligible clinician will be included in the APM Entity group and scored
according to the APM scoring standard for purposes of MIPS for that
performance year. Once an eligible clinician is determined to be part
of the APM Entity group in a MIPS APM at one of the snapshot dates, the
eligible clinician will be part of the group for purposes of MIPS and
the APM scoring standard for that performance period even if they leave
the APM Entity at a later date.
Comment: A commenter requested clarification about whether the APM
Entities will submit new Participation Lists for the purpose of MIPS or
if CMS will use Participation Lists submitted for the MIPS APM. One
commenter indicated it may be easier if the APM Entity provides CMS
with the list of MIPS APM participants. Another commenter suggested
that instead of using a Participation List CMS should design other
approaches to discern which eligible clinicians are in an APM Entity.
Response: We will use the Participation Lists that the APM Entity
provides to us in accordance with the particular MIPS APM's rules. Each
APM has particular rules for how the Participation Lists may be updated
during a performance year to reflect the APM Entities and their
participating eligible clinicians, as identified by their TIN/NPI
combinations. We will maintain these Participation Lists for each APM
in a dedicated database, and we will use the same Participation Lists
for operational purposes within the APM, for QP determinations, and to
determine which MIPS eligible clinicians are in the APM Entity group
for purposes of the APM scoring standard. Therefore, APM Entities such
as ACOs would not be required to submit any additional Participation
Lists for purposes of the Quality Payment Program.
Comment: A commenter requested CMS provide clear guidance as to how
each eligible clinician would be scored if they are a QP in a MIPS APM
so they can make informed decisions regarding APM participation.
Response: An eligible clinician who becomes a QP is exempt from
MIPS reporting requirements and the payment adjustment for the
applicable payment year. For example, if the eligible clinician is
determined to be a QP for the 2019 payment year based on 2017
performance, then the clinician is exempt from a MIPS payment
adjustment in 2019 and does not need to report data to MIPS for the
2017 performance period.
We are finalizing the use of the proposed APM participant
identifier to define the APM Entity group that is participating in a
MIPS APM. The APM Participation List information will be stored in a
database so that, among other uses, we can identify and include the
appropriate MIPS eligible clinicians in an APM Entity group to which
the APM scoring standard applies. We are revising our proposal to use
December 31 as the date on which an eligible clinician must appear on
the Participation List to be included in the APM Entity group for a
MIPS APM. Instead of identifying MIPS eligible clinicians participating
in a MIPS APM at a single point in time on December 31 of the
performance year, we will review the MIPS APM Participation Lists on
March 31, June 30, and August 31. All eligible clinicians who appear on
an APM Entity's list for a MIPS APM on at least one of those three
dates will be included in the APM Entity group for purposes of the APM
scoring standard for the year. We describe the determination of the APM
Entity group in full detail in section II.F.5. of this final rule with
comment period.
(5) APM Entity Group Scoring for the MIPS Performance Categories
As mentioned previously, section 1848(q)(3)(A) of the Act requires
the Secretary to establish performance standards for the measures and
activities under the following performance categories: (1) Quality; (2)
cost; (3) improvement activities; and (4) advancing care information.
We proposed at Sec. 414.1370 to calculate one final score that is
applied to the billing TIN/NPI combination of each MIPS eligible
clinician in the APM Entity group. Therefore, each APM Entity group
(for example, the MIPS eligible clinicians in a Shared Savings Program
ACO or an Oncology Care Model practice) would receive a score for each
of the four performance categories according to the proposals described
in the proposed rule, and we would calculate one final score for the
APM Entity group. The APM Entity group score would be applied to each
MIPS eligible clinician in the group, and subsequently used to develop
the MIPS payment adjustment that is applicable to each MIPS eligible
clinician in the group. Thus, the final score for the APM Entity group
and the participating MIPS eligible clinician score are the same. For
[[Page 77254]]
example, in the Shared Savings Program, the MIPS eligible clinicians in
each ACO would be an APM Entity group. That group would receive a
single final score that would be applied to each of its participating
MIPS eligible clinicians. Similarly, in the OCM, the MIPS eligible
clinicians identified on an APM Entity's Participation List would
comprise an APM Entity group. That group would receive a single final
score that would be applied to each of the MIPS eligible clinicians in
the group. We note that this APM Entity group final score is not used
to evaluate eligible clinicians or the APM Entity for purposes of
incentives within the APM, shared savings payments, or other potential
payments under the APM, and we currently do not foresee APMs using the
final score for purposes of evaluation within the APM. Rather, the APM
Entity group final score would be used only for the purposes of the APM
scoring standard under MIPS. It should be noted that although we
proposed that the APM scoring standard would only apply to participants
in MIPS APMs, MIPS eligible clinicians that participate in an APM
(including but not limited to a MIPS APM) and submit either individual
or group level data to MIPS earn a minimum score of 50 percent of the
highest potential improvement activities performance category score as
long as such MIPS eligible clinicians are on a list of participants for
an APM and are identifiable by the APM participant identifier.
We explained in the proposed rule that we want to avoid situations
in which different MIPS eligible clinicians in the same APM Entity
group receive different MIPS scores. APM Entities have a goal of
collective success under the terms of the APM, so having a variety of
differing MIPS adjustments for eligible clinicians within that
collective unit would undermine the intent behind the APM to test a
departure from a purely FFS system based on independent clinician
activity.
We proposed, for the first MIPS performance period, a specific
scoring and reporting approach for the MIPS eligible clinicians
participating in MIPS APMs, which would include the Shared Savings
Program, the Next Generation ACO Model, and other APMs that meet the
proposed criteria for a MIPS APM. In the proposed rule, we described
the APM Entity data submission requirements and proposed a scoring
approach for each of the MIPS performance categories for specific MIPS
APMs (the Shared Savings Program, Next Generation ACO Model, and all
other MIPS APMs).
The following is a summary of the comments we received regarding
our proposal to calculate one final score per APM Entity group in a
MIPS APM, and to apply that final score to each MIPS eligible clinician
(identified by the billing TIN/NPI combination) in the APM Entity group
and our proposal to give one-half of the maximum improvement activities
score to any MIPS eligible clinicians who are on a list of participants
and identified by the APM participant identifier, regardless of whether
they participate in an Advanced APM, MIPS APM, or other APM.
Comment: A number of commenters supported our proposal. Another
commenter was concerned that in a group, poor performance by some
eligible clinicians may affect the final score for other eligible
clinicians who perform better. A commenter suggested that CMS allow APM
participants to receive the MIPS score that is the higher of the APM
Entity group score and the group TIN score.
Response: As previously discussed, we are finalizing MIPS APM
scoring at the APM Entity level, and the final score will be applied to
each TIN/NPI combination in the APM Entity group. In any group
reporting structure, the resulting final score reflects the collective
performance of the group. Unless all APM Entity group members score
exactly equally, some will receive higher or lower final scores than
they would have achieved individually. We believe that, although some
group members' lower final scores may offset the final score for higher
performers in the APM Entity, the APM Entity level score appropriately
reflects the aggregate performance of the eligible clinicians in the
APM Entity. APMs are premised on a group of MIPS eligible clinicians
working together to collectively achieve the goals of the APM, and
providing different MIPS payment adjustments within an APM Entity is
not consistent with those goals.
Under specific circumstances, described below, in which a Shared
Savings Program ACO fails to report quality under the Shared Savings
Program requirements, participant TINs of such ACOs would be considered
the APM Entity groups for purposes of the APM scoring standard. Even
under this exception, those TIN groups would still be scored as a
cohesive unit, with no individual final score variation within the TIN.
Comment: A commenter supported allowing participants in other APMs,
such as the Accountable Health Communities Model, to receive
improvement activities credit. A few commenters requested that CMS
clarify how eligible clinicians and groups participating in APMs that
are not MIPS APMs would receive credit for APM participation in the
improvement activities category.
Response: MIPS eligible clinicians that participate in an APM that
is not a MIPS APM will need to be identified by their APM participant
identifier on a CMS-maintained list during the MIPS performance year in
order to receive one-half of the maximum improvement activities score
for APM participation. This list may be a Participation List, an
Affiliated Practitioner List, or another CMS-maintained list, as
applicable. Such CMS-maintained lists define APM participation;
therefore, MIPS eligible clinicians are not considered to be
participating in an APM unless included on a CMS-maintained list. We
will notify APM Entities in advance of the first day of the performance
period if the APM utilizes such a list. If the specific APM does
utilize such a list, then the MIPS eligible clinicians will be eligible
for the improvement activities credit.
Comment: A commenter requested that CMS clarify in the final rule
with comment period that a rheumatologist participating in other APMs
not listed as an Advanced or MIPS APM in this rule would receive one-
half of the maximum improvement activities score for such
participation.
Response: As stated above, an eligible clinician that participates
in an APM, even one that is not an Advanced APM or MIPS APM, would
still receive one-half the maximum score for improvement activities
through APM participation. CMS defines participation in APMs by
presence on a CMS-maintained list associated with an APM. Therefore, we
will use those lists to validate the APM participation improvement
activities credit.
Comment: A number of commenters supported scoring MIPS eligible
clinicians at the APM Entity level, and other commenters supported
scoring MIPS eligible clinicians at the TIN level. A commenter stated
that evaluating APM Entities, such as ACOs, at the APM Entity level
reinforces the APM Entity purpose and avoids fractures within the APM
Entity. Another commenter recommended CMS have all ACOs scored at the
APM Entity level for the advancing care information performance
category to recognize that the health information technology work in
most APMs is best measured as a whole. A few commenters requested that
the APM participants have a choice as to being scored at the APM Entity
level or participant TIN level. A
[[Page 77255]]
commenter further suggested that scoring at the APM Entity level
instead of the participant TIN level overstates the relationship
between these clinicians. One commenter stated that the policies in
which the primary TIN for an ACO reports the primary-care focused CMS
Web Interface measures result in a double standard whereby specialists
in ACOs are not held to the same individual level of accountability as
those in small group or solo practices where reporting is done at the
individual clinician level.
Response: We believe that APM Entities should be scored at the APM
Entity level because the APM Entity is a group of eligible clinicians
focused on achieving the collective goals of the APM, which include
shared responsibility for cost and quality. That stated, we
specifically recognize that there may be rare instances in which an ACO
in the Shared Savings Program may fail to report quality as required by
the Shared Savings Program, which would adversely impact the MIPS final
score of all MIPS eligible clinicians billing under ACO participant
TINs. Accordingly, in the event that a Shared Savings Program ACO does
not report quality measures as required by the Shared Savings Program,
scoring under the APM scoring standard would be calculated at the ACO
participant TIN level for MIPS eligible clinicians in that ACO, and
each of the ACO participant TINs would receive its own TIN-level final
score instead of an APM Entity-level final score. We note, however,
that our final policy would not cancel or mitigate any of the negative
consequences associated with non-reporting on quality as required under
the Shared Savings Program, including ineligibility for shared savings
payments and/or potential termination of the ACO from the program.
We are finalizing our proposal to calculate one final score at the
APM Entity level that will be applied to the billing TIN/NPI
combination of each MIPS eligible clinician in the APM Entity group. We
are also finalizing our policy to give one-half of the maximum
improvement activities score to eligible clinicians who are APM
participants, with the clarification that we would extend such
improvement activities scoring credit to any MIPS eligible clinicians
identified by an APM participant identifier on a Participation List, an
Affiliated Practitioners List, or other CMS-maintained list of
participants at any time during the MIPS performance period.
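The improvement activities credit finalized above operates as a floor: a MIPS eligible clinician identified on a qualifying CMS-maintained list receives at least one-half of the maximum improvement activities score, and a higher reported score is not reduced. The following is a minimal illustrative sketch of that floor logic; the point value and function names are hypothetical and are not taken from the rule.

```python
# Illustrative sketch of the improvement activities floor for APM
# participants. MAX_IA_SCORE and the function are hypothetical; the
# rule text sets the floor at one-half of the maximum improvement
# activities score for clinicians on a CMS-maintained participant list.

MAX_IA_SCORE = 40.0  # hypothetical maximum category points


def ia_score(reported_points: float, on_cms_participant_list: bool) -> float:
    """Apply the one-half-of-maximum floor for listed APM participants."""
    floor = 0.5 * MAX_IA_SCORE if on_cms_participant_list else 0.0
    return max(reported_points, floor)


ia_score(10.0, on_cms_participant_list=True)   # floor applies -> 20.0
ia_score(35.0, on_cms_participant_list=True)   # reported score kept -> 35.0
ia_score(10.0, on_cms_participant_list=False)  # no credit -> 10.0
```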
In the event that a Shared Savings Program ACO does not report
quality measures as required under the Shared Savings Program
regulations, then scoring on all MIPS performance categories will be at
the ACO participant TIN level, and the resulting TIN-level final score
will be applied to each of its constituent TIN/NPI combinations. For
purposes of both the Shared Savings Program quality performance
requirement and the APM scoring standard, any ``partial'' reporting of
quality measures through the CMS Web Interface that does not satisfy
the quality reporting requirements under the Shared Savings Program
will be considered a failure to report. We note that in this scenario,
each ACO participant TIN would need to report quality data to MIPS
according to MIPS group reporting requirements in order to avoid a
score of zero for the quality performance category.
We believe that this exception for the Shared Savings Program
recognizes the recommendations of several commenters that the APM
scoring standard should apply at the TIN level and concerns that in
some cases ACOs are not representative of the potentially widely-
varying MIPS performance across ACO participant TINs. Although we
maintain that the APM Entity-level scoring is generally appropriate to
reflect the collective goals and responsibilities of the group, we
believe that ACOs that fail to report quality as required under the
Shared Savings Program do not necessarily represent the quality
performance of their constituent TINs. Therefore, we believe it is
appropriate in such cases to allow ACO participant TINs to avoid a
score of zero in the quality performance category and to take
responsibility for their own MIPS reporting and scoring independent of
the ACO and other TINs in the ACO. Further, this policy is generally
consistent with similar policies that have been proposed for ACO
participant TINs under the PQRS and the Value Modifier program (81 FR
46408-46409, 46426-46427).
Additionally, we recognize that there may be instances when an APM
Entity's participation in the APM is terminated during the MIPS
performance period. As we state in section II.F.5. of this final rule
with comment period, we will not make the first assessment to determine
whether a MIPS eligible clinician is on an APM Entity's Participation
List until March 31 of the performance period. Therefore, if an APM
Entity group terminates its participation in the APM prior to March 31,
the MIPS eligible clinicians would not be considered part of an APM
Entity group for purposes of the APM scoring standard.
If an APM Entity's participation in the APM is terminated on or
after March 31 of a performance period, the MIPS eligible clinicians in
the APM Entity group would still be considered an APM Entity group in a
MIPS APM for the year, and would report and be scored under the APM
scoring standard.
(6) Shared Savings Program--Quality Performance Category Scoring Under
the APM Scoring Standard
We proposed that, beginning with the first MIPS performance period,
Shared Savings Program ACOs would need to submit their quality measures
to CMS only once, using the CMS Web Interface through the same process
that they use to report to the Shared Savings Program, in order to
report quality measures to MIPS. These data would be submitted once but used
for both the Shared Savings Program and for MIPS. Shared Savings
Program ACOs have used the CMS Web Interface for submitting their
quality measures since the program's inception, making this a familiar
data submission process. The Shared Savings Program quality measure
data reported to the CMS Web Interface would be used by CMS to
calculate the MIPS quality performance category score at the APM Entity
group level. The Shared Savings Program quality performance data that
is not submitted to the CMS Web Interface, for example, the CAHPS survey
and claims-based measures, would not be included in the MIPS APM
quality performance category score. The MIPS quality performance
category requirements and performance benchmarks for quality measures
submitted via the CMS Web Interface would be used to determine the MIPS
quality performance category score at the ACO level for the APM Entity
group. We stated that we believe this would reduce the reporting burden
for Shared Savings Program MIPS eligible clinicians by requiring
quality measure data to be submitted only once and used for both
programs.
In the proposed rule, we explained that we believe that no waivers
are necessary to adopt this approach because the quality measures
submitted via the CMS Web Interface under the Shared Savings Program
are also MIPS quality measures and would be scored under MIPS
performance standards. In the event that Shared Savings Program quality
measures depart from MIPS measures in the future, we would address such
changes including whether further waivers are necessary at such a time
in future rulemaking.
The following is a summary of the comments we received regarding
our proposal to have Shared Savings Program ACOs report quality
measures to MIPS using the CMS Web Interface as
[[Page 77256]]
they normally would under Shared Savings Program rules and our proposal
to calculate the MIPS quality performance category score at the APM
Entity group level based on the data reported by the ACO to the CMS Web
Interface and using MIPS performance benchmarks.
Comment: A commenter wanted to know which set of APM scoring
standard rules would apply to CPC+ practices that participate in both
CPC+ and the Shared Savings Program. The commenter noted that if the
reporting and scoring under the APM scoring standard for other MIPS
APMs applies to the CPC+ practice, the quality performance category
would be reweighted to zero. The commenter recommended that MIPS
eligible clinicians who participate in both the CPC+ and the Shared
Savings Program use the Shared Savings Program rules for reporting and
scoring under the APM scoring standard.
Response: In May 2016, CMS announced that practices may participate
in both a CPC+ model and in an ACO participating in the Shared Savings
Program. More information about dual participation may be found in the
CPC+ FAQs or RFA at https://innovation.cms.gov/Files/x/cpcplus-practiceapplicationfaq.pdf or https://innovation.cms.gov/Files/x/cpcplus-rfa.pdf. For purposes of the APM scoring standard, MIPS
eligible clinicians in CPC+ practices that are also participating in a
Shared Savings Program ACO will be considered part of a Shared Savings
Program ACO. CPC+ practices that are part of a Shared Savings Program
ACO will report quality to CPC+ as required by the CPC+ model but will
not receive the CPC+ performance-based incentive payment. As part of a
Shared Savings Program ACO, CPC+ practices, along with the other ACO
participants, will be subject to the payment incentives for cost and
quality under the Shared Savings Program. Because CPC+ practices that
participate in both the CPC+ model and the Shared Savings Program are
not eligible to receive the performance-based incentive payment under
the CPC+ model, responsibility for cost and quality is assessed more
comprehensively under the Shared Savings Program. Therefore, we believe
that the Shared Savings Program participation of these ``dual
participants'' should determine the manner in which we assess them
under the APM scoring standard.
Comment: A commenter agreed with the proposed approach of not
including CAHPS or other non-CMS Web Interface quality data measures in
the MIPS APM quality performance category score for ACOs in the Shared
Savings Program. Alternately, a commenter recommended that CAHPS
measures be included in Shared Savings Program ACO quality performance
category scores.
Response: Because CAHPS survey responses are not submitted to the
CMS Web Interface and may not be available in time for inclusion in the
MIPS quality performance category scoring, we are not including these
measures in the MIPS quality performance category score for the ACOs in
the Shared Savings Program and the Next Generation ACO Model.
Comment: One commenter requested clarification as to which quality
measures, specifically whether MIPS population health measures, would
be included in the APM scoring standard for Shared Savings Program
ACOs.
Response: The MIPS population health measures will not be included
in the quality performance category score for eligible clinicians
participating in the Shared Savings Program, the Next Generation ACO
Model or other MIPS APMs under the APM scoring standard.
Comment: A commenter requested that CMS ensure that all the MIPS
eligible clinicians billing under the TIN of an ACO participant in a
Shared Savings Program ACO receive the APM Entity group final score
even though most ACO quality measures are for primary care physicians.
Response: All eligible clinicians that bill through the TIN of a
Shared Savings Program ACO participant and are included on the
Participant List on at least one of the three Participation List
snapshot dates will receive the APM Entity group final score.
Comment: A commenter requested that all ACOs be exempt from the
MIPS quality performance category because they are already being
assessed for quality under the APM and also requested that Shared
Savings Program Track 1 participants have the option to be exempt from
MIPS.
Response: All MIPS eligible clinicians participating in the Shared
Savings Program are subject to MIPS unless they are determined to be a
QP or a Partial QP whose APM Entity elects not to report under MIPS.
This includes MIPS eligible clinicians who are not participating in
Advanced APMs. Under the APM scoring standard, MIPS eligible clinicians
participating in Shared Savings Program ACOs do not have to do any
additional reporting to satisfy MIPS quality performance category
reporting requirements.
We are finalizing our proposal that a Shared Savings Program ACO's
quality data reported to the CMS Web Interface as required by Shared
Savings Program rules will also be used for purposes of scoring the
MIPS quality performance category using MIPS performance benchmarks. We
note that for purposes of the Shared Savings Program quality reporting
requirement and the APM scoring standard, any ``partial'' reporting of
quality measures through the CMS Web Interface that does not satisfy
the requirements under the Shared Savings Program will be considered a
failure to report, triggering the exception finalized above in which we
will separately assess each ACO participant TIN under the APM scoring
standard.
(7) Shared Savings Program--Cost Performance Category Scoring Under the
APM Scoring Standard
We proposed that for the first MIPS performance period, we would
not assess MIPS eligible clinicians participating in the Shared Savings
Program (the MIPS APM) under the cost performance category. We proposed
this approach because: (1) Eligible clinicians participating in the
Shared Savings Program are already subject to cost and utilization
performance assessments under the APM; (2) the Shared Savings Program
measures cost in terms of an objective, absolute total cost of care
expenditure benchmark for a population of attributed beneficiaries, and
participating ACOs may share savings and/or losses based on that
standard, whereas the MIPS cost measures are relative measures such
that clinicians are graded relative to their peers, and therefore
different than assessing total cost of care for a population of
attributed beneficiaries; and (3) the beneficiary attribution
methodologies for measuring cost under the Shared Savings Program and
MIPS differ, leading to an unpredictable degree of overlap (for
eligible clinicians and for us) between the sets of beneficiaries for
which eligible clinicians would be responsible that would vary based on
unique APM Entity characteristics such as which and how many TINs
comprise an ACO. We believe that with an APM Entity's finite resources
for engaging in efforts to improve quality and lower costs for a
specified beneficiary population, the population identified through an
APM must take priority to ensure that the goals and program evaluation
associated with the APM are as clear and free of confounding factors as
possible. The potential for different, conflicting results across
Shared Savings Program and MIPS assessments--due to the differences in
attribution, the inclusion in MIPS of episode-based measures that do
not
[[Page 77257]]
reflect the total cost of care, and the objective versus relative
assessment factors listed above--creates uncertainty for MIPS eligible
clinicians who are attempting to strategically transform their
respective practices and succeed under the terms of the Shared Savings
Program.
For example, Shared Savings Program ACOs are held accountable for
expenditure benchmarks that reflect the total Medicare Parts A and B
spending for their assigned beneficiaries, whereas many of the proposed
MIPS cost measures focus on spending for particular episodes of care or
clinical conditions. We consider it a programmatic necessity that the
Shared Savings Program has the ability to structure its own measurement
and payment for performance on total cost of care independent from
other incentive programs such as the cost performance category under
MIPS. Thus, we proposed to reduce the MIPS cost performance category
weight to zero for all MIPS eligible clinicians in APM Entities
participating in the Shared Savings Program.
Accordingly, under section 1899(f) of the Act, we proposed to
waive--for MIPS eligible clinicians participating in the Shared Savings
Program--the requirement under section 1848(q)(5)(E)(i)(II) of the Act
that specifies the scoring weight for the cost performance category.
With the proposed reduction of the cost performance category weight to
zero, we believed it would be unnecessary to specify and use cost
measures in determining the MIPS final score for these MIPS eligible
clinicians. Therefore, under section 1899(f) of the Act, we proposed to
waive--for MIPS eligible clinicians participating in the Shared Savings
Program--the requirements under sections 1848(q)(2)(B)(ii) and
1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost
measures in calculating the MIPS final score for such MIPS eligible
clinicians.
Given the proposal to waive requirements under section
1848(q)(5)(E)(i)(II) of the Act in order to reduce the weight of the
cost performance category to zero, we also needed to specify how that
weight would be redistributed among the remaining performance
categories in order to maintain a total weight of 100 percent. We
proposed to redistribute the cost performance category weight to both
the improvement activities and advancing care information performance
categories as specified in Table 11 of this final rule with comment
period. The MIPS cost performance category was proposed to have a
weight of 10 percent for the first performance period. Because the MIPS
quality performance category bears a relatively higher weight than the
other three MIPS performance categories, and because, in accordance
with section 1848(q)(5)(E)(i)(I) and (II) of the Act, the weight for
this category will be reduced from 50 to 30 percent as of the 2021 MIPS
payment period, we proposed to evenly redistribute the 10 percent cost
performance category weight to the improvement activities and advancing
care information performance categories so that the redistribution does
not change the relative weight of the quality performance category.
Because the MIPS quality performance category weight is required under
the statute to be reduced to 30 percent after the first 2 years of
MIPS, we believe that increasing the quality performance category
weight would be incongruous in light of the eventual balance of the
weights set forth in the statute. The redistributed cost performance
category weight of 10 percent would result in a 5 percentage point
increase (from 15 to 20 percent) for the improvement activities
performance category and a 5 percentage point increase (from 25 to 30
percent) for the advancing care information performance category. We
invited comments on the proposed weights and specifically whether we
should increase the MIPS quality performance category weight.
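The redistribution described above is simple arithmetic: the cost category weight is zeroed out, its 10 percentage points are split evenly between the improvement activities and advancing care information categories, and the quality weight is left unchanged. A minimal sketch follows; the category labels and the function are illustrative, not part of the rule.

```python
# Illustrative sketch of the cost-weight redistribution for Shared
# Savings Program APM Entity groups. Baseline percentages come from
# the text above; the function and key names are hypothetical.

def redistribute_cost_weight(weights: dict) -> dict:
    """Zero out the cost category and split its weight evenly between
    improvement activities and advancing care information, leaving the
    quality weight unchanged."""
    cost = weights.pop("cost")
    half = cost / 2
    weights["improvement_activities"] += half
    weights["advancing_care_information"] += half
    weights["cost"] = 0.0
    return weights


# Proposed first-period weights before redistribution (percent).
baseline = {
    "quality": 50.0,
    "cost": 10.0,
    "improvement_activities": 15.0,
    "advancing_care_information": 25.0,
}

final = redistribute_cost_weight(baseline)
# quality stays at 50; improvement activities rises from 15 to 20;
# advancing care information rises from 25 to 30; cost drops to 0.
```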
In the proposed rule we explained that as the MIPS cost performance
category evolves over time, there might be greater potential for
alignment and less potential duplication or conflict with MIPS cost
measurement for MIPS eligible clinicians participating in APMs such as
the Shared Savings Program. We will continue to monitor and consider
how we might incorporate an assessment in the MIPS cost performance
category into the APM scoring standard for MIPS eligible clinicians
participating in the Shared Savings Program. We also understand that
reducing the cost performance category weight to zero and
redistributing the weight to the improvement activities and advancing
care information performance categories could, to the extent that
improvement activities and advancing care information scores are higher
than the scores these MIPS eligible clinicians would have received
under the cost performance category, result in higher final
scores on average for MIPS eligible clinicians participating in the
Shared Savings Program. We solicited comment on the possibility of
assigning a neutral score to the Shared Savings Program APM Entity
groups for the cost performance category to moderate MIPS final scores
for APM Entities participating in the Shared Savings Program. We also
generally solicited comment on our proposed policy, and on whether and
how we should incorporate the cost performance category into the APM
scoring standard under MIPS for eligible clinicians participating in
the Shared Savings Program for future years.
The following is a summary of the comments we received regarding
our proposal to reduce the MIPS cost performance category weight to
zero for APM Entity groups participating in the Shared Savings Program.
Comment: Several commenters supported our proposal not to assess
cost for MIPS APMs and our efforts to reduce duplicative measurement.
One commenter suggested we give a full score for the cost performance
category instead of redistributing the 10 percent weight to other MIPS
performance categories. A few commenters recommended the 10 percent
weight for the cost performance category be redistributed entirely to
the improvement activities performance category.
One commenter recommended that MIPS eligible clinicians in Shared
Savings Program ACOs receive extra credit in the cost performance
category if their ACO achieved expenditures below its benchmark. The
commenter suggested that CMS consider having a sliding scale of cost
category points awarded to MIPS eligible clinicians that participate in
Shared Savings Program ACOs with benchmarks of less than $10,000 per
beneficiary per year. One commenter proposed that CMS reward Shared
Savings Program ACOs that score at or above the average on cost
measures, and hold harmless Shared Savings Program ACOs scoring below
average. One commenter was opposed to reducing the cost performance
category weight to zero.
Response: We appreciate commenters' widespread support for this
proposal to reduce the weight of the MIPS cost performance category to
zero under the APM scoring standard for eligible clinicians
participating in the Shared Savings Program. While we will continue to
monitor and consider how we might in future years incorporate the MIPS
cost performance category into the APM scoring standard for eligible
clinicians participating in the Shared Savings Program, we believe that
assessment in this category would conflict with the assessment of the
financial performance of ACOs
[[Page 77258]]
participating in the Shared Savings Program at this time. Because ACOs
in the Shared Savings Program are assessed through particular
attribution and benchmarking methodologies for purposes of earning
shared savings payments, we believe that adding additional and separate
MIPS incentives around cost would be redundant, potentially confusing,
and could undermine the incentives built into the Shared Savings
Program.
We are finalizing our proposal to reduce the cost performance
category to zero percent for APM Entity groups in the Shared Savings
Program and to evenly redistribute the 10 percent cost performance
category weight to the improvement activities and advancing care
information performance categories. We note that this policy may seem
unnecessary given that the MIPS policy for the initial performance year
reduces the cost performance category weight to zero for all MIPS
eligible clinicians. However, the zero weight for the cost performance
category for APM Entity groups in the Shared Savings Program will
remain in place for subsequent years unless we modify it through future
notice and comment rulemaking, whereas the zero weight given to the
cost performance category under the generally applicable MIPS scoring
standard is limited to the first performance period, will increase to
10 percent in the second performance period, and will increase to 30
percent in the third performance period. We believe that setting this
foundation from the outset of the Quality Payment Program will
contribute to consistency and minimize uncertainty for MIPS APM
participants at least until such a time as we might identify a means to
consider performance in the MIPS cost performance category that is
congruent with cost evaluation under the Shared Savings Program.
We further note that although we proposed to use our authority
under section 1899(f) of the Act to waive the requirement under section
1848(q)(5)(E)(i)(II) of the Act to specify the scoring weight for the
cost performance category because it was necessary to waive this
requirement in order to ensure that the Shared Savings Program retains
the ability to structure its own measurement and payment for
performance on total cost of care independent of other incentive
programs, we believe we could also use our authority under section
1899(b)(3)(D) of the Act to accomplish this result. Section
1899(b)(3)(D) of the Act allows us to incorporate reporting
requirements under section 1848 into the reporting requirements for the
Shared Savings Program, as we determine appropriate, and to use
alternative criteria than would otherwise apply. Thus, we believe that
section 1899(b)(3)(D) of the Act also provides authority to reduce the
weight of the cost performance category to zero percent for eligible
clinicians participating in Shared Savings Program ACOs and to
redistribute the 10 percent weight to the improvement activities and
advancing care information performance categories.
(8) Shared Savings Program--Improvement Activities and Advancing Care
Information Performance Category Scoring Under the APM Scoring Standard
We proposed that MIPS eligible clinicians participating in the
Shared Savings Program would submit data for the MIPS improvement
activities and advancing care information performance categories
through their respective ACO participant billing TINs independent of
the Shared Savings Program ACO. Under section 1848(q)(5)(C)(ii) of the
Act, all ACO participant group billing TINs would receive a minimum of
one half of the highest possible score for the improvement activities
performance category. Additionally, under section 1848(q)(5)(C)(i) of
the Act, any ACO participant TIN that is determined to be a patient-
centered medical home or comparable specialty practice will receive the
highest potential score for the improvement activities performance
category. The improvement activities and advancing care information
scores from all the ACO participant billing TINs would be averaged to a
weighted mean MIPS APM Entity group level score. We proposed to use a
weighted mean in computing the overall improvement activities and
advancing care information performance category scores to account for
differences in the size of each TIN and to allow each TIN to
contribute to the overall score based on its size. Then all MIPS
eligible clinicians in the APM Entity group, as identified by their APM
participant identifiers, would receive that APM Entity score. The
weights used for each ACO participant billing TIN would be the number
of MIPS eligible clinicians in that TIN. Because all providers and
suppliers that bill through the TIN of an ACO participant are required
to agree to participate in the ACO, all MIPS eligible clinicians that
bill through the TIN of an ACO participant are considered to be
participating in the ACO. Any Shared Savings Program ACO participant
billing TIN that does not submit data for the MIPS improvement
activities and/or advancing care information performance categories
would contribute a score of zero for each performance category for
which it does not report; and that score would be incorporated into the
resulting weighted average score for the Shared Savings Program ACO.
All MIPS eligible clinicians in the ACO (the APM Entity group) would
receive the same score that is calculated at the ACO level (the APM
Entity level).
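The weighted-mean computation described above can be sketched as follows. This is a minimal illustration under hypothetical TIN data: the weight for each ACO participant billing TIN is its number of MIPS eligible clinicians, a TIN that does not report contributes a score of zero, and every TIN/NPI in the APM Entity group receives the resulting entity-level score.

```python
# Minimal sketch of the APM Entity-level weighted mean described above.
# TIN identifiers, scores, and clinician counts are hypothetical; a TIN
# that did not report (score=None) contributes a score of zero.

def apm_entity_score(tins: list) -> float:
    """Weighted mean of TIN-level category scores, weighted by the
    number of MIPS eligible clinicians billing through each TIN."""
    total_clinicians = sum(t["clinicians"] for t in tins)
    weighted_sum = sum(
        (t["score"] if t["score"] is not None else 0.0) * t["clinicians"]
        for t in tins
    )
    return weighted_sum / total_clinicians


tins = [
    {"tin": "TIN-A", "clinicians": 30, "score": 40.0},  # reported
    {"tin": "TIN-B", "clinicians": 10, "score": 20.0},  # reported
    {"tin": "TIN-C", "clinicians": 10, "score": None},  # did not report -> 0
]

entity_score = apm_entity_score(tins)  # (40*30 + 20*10 + 0*10) / 50 = 28.0
# Every MIPS eligible clinician (TIN/NPI) in the APM Entity group
# receives this same entity-level score.
```

Note how the non-reporting TIN pulls the group score down for everyone, which is the behavior the rule describes for TINs that fail to submit improvement activities or advancing care information data.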
In the proposed rule, we recognized that Shared Savings Program
eligible clinicians participate as a complete TIN because all of the
eligible clinicians that have reassigned their Medicare billing rights
to the TIN of an ACO participant must agree to participate in the
Shared Savings Program. This is different from other APMs, which may
include APM Entity groups with eligible clinicians who share a billing
TIN with other eligible clinicians who do not participate in the APM
Entity. We solicited comment on a possible alternative approach in
which improvement activities and advancing care information performance
category scores would be applied to all MIPS eligible clinicians at the
individual billing TIN level, as opposed to aggregated to the ACO
level, for Shared Savings Program participants. We also indicated that
if MIPS APM scores were applied to each TIN in an ACO at the TIN level,
we would also likely need to permit those TINs to make the Partial QP
election, as discussed elsewhere in this final rule with comment period, at
the TIN level. We proposed that under the APM scoring standard, the
ACO-level APM Entity group score would be applied to each participating
MIPS eligible clinician to determine the MIPS payment adjustment. We
explained that we believe calculating the score at the APM Entity level
mirrors the way APM participants are assessed for their shared savings
and other incentive payments in the APM, but we understand there may be
reasons why a group TIN, particularly one that believes it would
achieve a higher score than the weighted average APM Entity level
score, would prefer to be scored in the improvement activities and
advancing care information performance categories at the level of the
group billing TIN rather than the ACO (APM Entity level).
We solicited comment as to whether Shared Savings Program ACO
eligible clinicians should be scored at the ACO level or the group
billing TIN level for the improvement activities and advancing care
information performance categories.
The following is a summary of the comments we received regarding
our proposals for how to score and weight the improvement activities
and
[[Page 77259]]
advancing care information performance categories for the Shared
Savings Program under the APM scoring standard and on whether to score
these two MIPS performance categories at the APM Entity or the ACO
participant TIN level.
Comment: Several commenters suggested that all APM Entities should
receive full credit for improvement activities because they are already
performing these activities as a result of being a participant in an
APM. A few commenters stated that all APM participants should get at
least 80 percent of the maximum score for improvement activities. Some
commenters suggested that ACOs are involved in many of the improvement
activities on a daily basis in order to meet the stringent requirements
of the Shared Savings Program and the Next Generation ACO Model and
requested that CMS provide a simple and straightforward way for ACOs to
attest that their eligible clinicians have been involved in improvement
activities for at least 90 days in the performance year by being a part
of an ACO initiative.
Response: We agree with the comments that eligible clinicians
participating in the Shared Savings Program and other MIPS APMs are
actively engaged in improvement activities by virtue of participating
in an APM. In an effort to further reduce reporting burden for eligible
clinicians in MIPS APMs and to better recognize improvement activities
work performed through participation in MIPS APMs, we are modifying our
proposal with respect to scoring for the improvement activities
performance category under the APM scoring standard. Specifically, for
APM Entity groups in the Shared Savings Program, Next Generation ACO
Model and other MIPS APMs, we will assign a baseline score for the
improvement activities performance category based on the improvement
activity requirements under the terms of the particular MIPS APM. CMS
will review the MIPS APM requirements as they relate to activities
specified under the generally applicable MIPS improvement activities
performance category and assign an improvement activities score for
each MIPS APM that is applicable to all APM Entity groups participating
in the MIPS APM. To develop the improvement activities score assigned
to a MIPS APM and applicable to all APM Entity groups in the APM, CMS
will compare the requirements of the MIPS APM with the list of
improvement activities measures in section II.E.5.f. of this final rule
with comment period and score those measures in the same manner that
they are otherwise scored for MIPS eligible clinicians according to
section II.E.5.f. of this final rule with comment period. Thus, points
assigned to an APM Entity group in a MIPS APM under the improvement
activities performance category will relate to documented requirements
under the terms and conditions of the MIPS APM, such as in a
participation agreement or regulation. We will apply this improvement
activities score for the MIPS APM to each APM Entity group within the
MIPS APM. For example, points assigned in the improvement activities
performance category for participation in the Next Generation ACO Model
will relate to documented requirements under the terms of the model, as
set forth in the model's participation agreement. In the event that a
MIPS APM incorporates sufficient improvement activities to receive the
maximum score, APM Entity groups or their constituent MIPS eligible
clinicians (or TINs) participating in the MIPS APM will not need to
submit data for the improvement activities performance category in
order to receive that maximum improvement activities score. In the
event that a MIPS APM does not incorporate sufficient improvement
activities to receive the maximum potential score, APM Entities will
have the opportunity to report and add points to the baseline MIPS APM-
level score on behalf of all MIPS eligible clinicians in the APM Entity
group for additional improvement activities that would apply to the APM
Entity level improvement activities performance category score. The
improvement activities performance category score we assign to the MIPS
APM based on improvement activity requirements under the terms of the
APM will be published in advance of the MIPS performance period on the
CMS Web site.
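For illustration only (not part of the regulatory text), the finalized improvement activities scoring logic can be sketched as an assigned baseline plus any additionally reported activities, capped at the category maximum, with the statutory floor of one half of the highest possible score under section 1848(q)(5)(C)(ii) of the Act; the point values shown are hypothetical:

```python
def ia_score(assigned_baseline, reported_points, max_points):
    """Improvement activities score for an APM Entity group.

    assigned_baseline: the score CMS assigns to the MIPS APM based on
    its documented requirements.
    reported_points: additional activities the APM Entity reports when
    the baseline is below the maximum.
    """
    floor = max_points / 2  # statutory minimum for APM participants
    if assigned_baseline >= max_points:
        return max_points   # maximum already earned; no reporting needed
    total = min(assigned_baseline + reported_points, max_points)
    return max(total, floor)

full = ia_score(assigned_baseline=40, reported_points=0, max_points=40)     # 40
topped = ia_score(assigned_baseline=30, reported_points=20, max_points=40)  # capped at 40
partial = ia_score(assigned_baseline=30, reported_points=5, max_points=40)  # 35
```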
Comment: A commenter generally agreed with the proposed reweighting
of performance categories for MIPS APMs under the APM scoring standard
but recommended the 10 percent for the cost performance category be
reallocated to improvement activities instead of both improvement
activities and advancing care information. Another commenter also
agreed with the scoring and supported the weight for the improvement
activities performance category. One commenter recommended that MIPS
APM participants have the option of having the APM Entity report
improvement activities in order to achieve group scores higher than the
initial 50 percent. A few commenters requested that the MIPS APMs only
be scored on the quality and improvement activities performance
categories.
Response: After considering comments, we believe the reweighting of
the improvement activities and the advancing care information
performance categories should be finalized as proposed. We believe the
proposed weights represent an appropriate balance between improvement
activities and advancing care information, both of which are important
goals of the MIPS program. Moreover, because the quality performance
category weight will be reduced over time, we believe that increasing
the quality performance category weight in the first performance period
would be incongruous with the balance of the weights set forth in the
statute.
For the Shared Savings Program we are finalizing the weights
assigned to each of the MIPS performance categories as proposed for
Shared Savings Program ACOs: Quality 50 percent; cost 0 percent;
improvement activities 20 percent; and advancing care information 30
percent for purposes of the APM scoring standard. We are finalizing the
proposal that for the advancing care information performance category,
ACO participant TINs will report the category to MIPS, and the TIN
scores will be aggregated and weighted in order to calculate one APM
Entity score for the category. In the event a Shared Savings Program
ACO fails to satisfy quality reporting requirements for measures
reported through the CMS Web Interface, advancing care information
group TIN scores will not be aggregated to the APM Entity level.
Instead, each ACO participant TIN will be scored separately based on
its TIN-level group reporting for the advancing care information
performance category.
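For illustration only (not part of the regulatory text), the finalized advancing care information scoring for Shared Savings Program ACOs, including the TIN-level fallback when the ACO fails CMS Web Interface quality reporting, can be sketched as follows; the TIN identifiers and values are hypothetical:

```python
def aci_scores(tin_scores, tin_clinician_counts, aco_met_quality_reporting):
    """Advancing care information scoring for a Shared Savings Program ACO.

    If the ACO satisfied its CMS Web Interface quality reporting
    requirements, TIN group scores are rolled up into one weighted
    APM Entity score applied to every TIN; otherwise each ACO
    participant TIN keeps its own TIN-level group score.
    """
    if not aco_met_quality_reporting:
        return dict(tin_scores)  # each TIN scored separately
    total = sum(tin_clinician_counts.values())
    entity = sum(
        tin_scores.get(tin, 0.0) * count
        for tin, count in tin_clinician_counts.items()
    ) / total
    return {tin: entity for tin in tin_clinician_counts}

scores = {"A": 90.0, "B": 70.0}
counts = {"A": 3, "B": 1}
rolled = aci_scores(scores, counts, aco_met_quality_reporting=True)
# weighted entity score: (270 + 70) / 4 = 85.0, applied to both TINs
separate = aci_scores(scores, counts, aco_met_quality_reporting=False)
```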
We are revising our proposal with respect to the scoring of the
improvement activities performance category for the Shared Savings
Program. We will assign an improvement activities score for the Shared
Savings Program based on the improvement activities required under the
Shared Savings Program. We consider all Shared Savings Program tracks
together for purposes of assigning an improvement activities
performance category score because the tracks all require the same
activities of their participants. All APM Entity groups in the Shared
Savings Program will receive that baseline improvement activities
score. To develop the improvement activities score for the Shared
Savings Program, we will compare the requirements of the Shared Savings
Program with the list of improvement activities measures in section
II.E.5.f. of
[[Page 77260]]
this final rule with comment period and score those measures in the
same manner that they would otherwise be scored for MIPS eligible
clinicians according to section II.E.5.f. of this final rule with
comment period. We will assign points for improvement activities toward
the score for the Shared Savings Program based on documented
requirements for improvement activities under the terms of the Shared
Savings Program. We will publish the assigned scores for the Shared
Savings Program on the CMS Web site before the beginning of the MIPS
performance period. In the event that the assigned score represents the
maximum improvement activities score, APM Entity groups will not need
to report additional improvement activities. In the event that the
assigned score does not represent the maximum improvement activities
score, APM Entities will have the opportunity to report additional
improvement activities that would apply to the APM Entity group score.
Table 11 summarizes the finalized APM scoring standard rules for the
Shared Savings Program.
Table 11--APM Scoring Standard for the Shared Savings Program--2017 Performance Period for the 2019 Payment
Adjustment
----------------------------------------------------------------------------------------------------------------
 MIPS performance category      APM entity submission            Performance score            Performance
                                requirement                                                   category weight %
----------------------------------------------------------------------------------------------------------------
Quality....................... Shared Savings Program ACOs      The MIPS quality performance            50
                                submit quality measures to the   category requirements and
                                CMS Web Interface on behalf of   benchmarks will be used to
                                their participating MIPS         determine the MIPS quality
                                eligible clinicians.             performance category score at
                                                                 the ACO level.
Cost.......................... MIPS eligible clinicians will    N/A............................          0
                                not be assessed on cost.
Improvement Activities........ ACOs only need to report if the  CMS will assign the same                20
                                CMS-assigned improvement         improvement activities score
                                activities score is below the    to each APM Entity group based
                                maximum improvement activities   on the activities required of
                                score.                           participants in the Shared
                                                                 Savings Program. The minimum
                                                                 score is one half of the total
                                                                 possible points. If the
                                                                 assigned score does not
                                                                 represent the maximum
                                                                 improvement activities score,
                                                                 ACOs will have the opportunity
                                                                 to report additional
                                                                 improvement activities to add
                                                                 points to the APM Entity group
                                                                 score.
Advancing Care Information.... All ACO participant TINs in the  All of the ACO participant TIN          30
                                ACO submit under this category   scores will be aggregated as a
                                according to the MIPS group      weighted average based on the
                                reporting requirements.          number of MIPS eligible
                                                                 clinicians in each TIN to
                                                                 yield one APM Entity group
                                                                 score.
----------------------------------------------------------------------------------------------------------------
(9) Next Generation ACO Model--Quality Performance Category Scoring
Under the APM Scoring Standard
We proposed that beginning with the first MIPS performance period,
Next Generation ACOs would only need to submit their quality measures
to CMS once using the CMS Web Interface through the same process that
they use to report to the Next Generation ACO Model. These data would
be submitted once but used for purposes of both the Next Generation ACO
Model and MIPS. Next Generation ACO Model ACOs have used the CMS Web
Interface for submitting their quality measures since the model's
inception and would most likely continue to use the CMS Web Interface
as the submission method in future years. The Next Generation ACO Model
quality measure data reported to the CMS Web Interface would be used by
CMS to calculate the MIPS APM quality performance score. The MIPS
quality performance category requirements and performance benchmarks
for reporting quality measures via the CMS Web Interface would be used
to determine the MIPS quality performance category score at the ACO
level for the APM Entity group. The Next Generation ACO Model quality
performance data that are not submitted to the CMS Web Interface, for
example the CAHPS survey and claims-based measures, would not be
included in the APM Entity group quality performance score. The APM
Entity group quality performance category score would be calculated
using only quality measure data submitted through the CMS Web Interface
and scored using the MIPS benchmarks, whereas the quality reporting
requirements and performance benchmarks calculated for the Next
Generation ACO Model would continue to be used to assess the ACO under
the APM-specific requirements. We stated in the proposed rule that we
believe this approach would reduce the reporting burden for Next
Generation ACO Model participants by requiring quality measure data to
be submitted only once and used for both MIPS and the Next Generation
ACO Model.
In the proposed rule, we indicated that we believe that no waivers
are necessary here because the quality measures submitted via the CMS
Web Interface under the Next Generation ACO Model are MIPS quality
measures and would be scored under MIPS performance standards. In the
event that Next Generation ACO Model quality measures depart from MIPS
measures in the future, we stated that we would address such changes,
including whether further waivers are necessary, at such a time in
future rulemaking.
The following is a summary of the comments we received regarding
our proposal to have Next Generation ACOs report quality measures to
MIPS using the CMS Web Interface as they normally would under Next
Generation ACO Model rules and our proposal for CMS to calculate the
MIPS quality performance category score at the APM Entity group level
based on the data reported to the CMS Web Interface and using the MIPS
performance standards.
Comment: A commenter requested clarification regarding whether the
population-based quality measures and CAHPS would be included in the
Next Generation ACO quality performance category score.
Response: The population-based quality measures and CAHPS will not
be included in the quality scoring under the APM scoring standard. This
final rule with comment period does not affect APM-specific measurement
and incentives.
[[Page 77261]]
We are finalizing the scoring policy for the quality performance
category for the Next Generation ACO Model as proposed. We will use
Next Generation ACO Model quality measures submitted by the ACO to the
CMS Web Interface and MIPS benchmarks to score quality for MIPS
eligible clinicians in a Next Generation ACO at the APM Entity level.
An ACO's failure to report quality as required by the Next Generation
ACO Model will result in a quality score of zero for the APM Entity
group.
(10) Next Generation ACO Model--Cost Performance Category Scoring Under
the APM Scoring Standard
We proposed that for the first MIPS performance period, we would
not assess MIPS eligible clinicians in the Next Generation ACO Model
participating in the MIPS APM under the cost performance category. We
proposed this approach because: (1) MIPS eligible clinicians
participating in the Next Generation ACO Model are already subject to
cost and utilization performance assessments under the APM; (2) the
Next Generation ACO Model measures cost in terms of an objective,
absolute total cost of care expenditure benchmark for a population of
attributed beneficiaries, and participating ACOs may share savings and/
or losses based on that standard, whereas the MIPS cost measures are
relative measures such that clinicians are graded relative to their
peers, which differs from assessing total cost of care for a
population of attributed beneficiaries; and (3) the beneficiary
attribution methodologies for measuring cost under the Next Generation
ACO Model and MIPS differ, leading to an unpredictable degree of
overlap (for eligible clinicians and for us) between the sets of
beneficiaries for which eligible clinicians would be responsible that
would vary based on unique APM Entity characteristics such as which and
how many eligible clinicians make up an ACO. We believe that with an
APM Entity's finite resources for engaging in efforts to improve
quality and lower costs for a specified beneficiary population, the
population identified through the Next Generation ACO Model must take
priority to ensure that the goals and model evaluation associated with
the APM are as clear and free of confounding factors as possible. The
potential for different, conflicting results across the Next Generation
ACO Model and MIPS assessments--due to the differences in attribution,
the inclusion in MIPS of episode-based measures that do not reflect the
total cost of care, and the objective versus relative assessment
factors listed above--creates uncertainty for eligible clinicians who
are attempting to strategically transform their respective practices
and succeed under the terms of the Next Generation ACO Model. For
example, Next Generation ACOs are held accountable for expenditure
benchmarks that reflect the total Medicare Parts A and B spending for
their attributed beneficiaries, whereas many of the proposed MIPS cost
measures focus on spending for particular episodes of care or clinical
conditions. Therefore, we proposed to reduce the MIPS cost performance
category weight to zero for all MIPS eligible clinicians participating
in the Next Generation ACO Model. Accordingly, under section
1115A(d)(1) of the Act, we proposed to waive--for MIPS eligible
clinicians participating in the Next Generation ACO Model--the
requirement under section 1848(q)(5)(E)(i)(II) of the Act that
specifies the scoring weight for the cost performance category. With
the proposed reduction of the cost performance category weight to zero,
we believe it would be unnecessary to specify and use cost measures in
determining the MIPS final score for these MIPS eligible clinicians.
Therefore, under section 1115A(d)(1) of the Act, we proposed to waive--
for MIPS eligible clinicians participating in the Next Generation ACO
Model--the requirements under sections 1848(q)(2)(B)(ii) and
1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost
measures in calculating the MIPS final score for such eligible
clinicians.
Given the proposal to waive requirements under section
1848(q)(5)(E) of the Act to reduce the weight of the cost performance
category to zero, we must subsequently specify how that weight would be
redistributed among the remaining performance categories to maintain a
total weight of 100 percent. We proposed to redistribute the cost
performance category weight to both the improvement activities and
advancing care information performance categories as specified in Table
13 of the proposed rule. We proposed a weight of 10 percent for the
MIPS cost performance category. Because the MIPS quality performance
category bears a relatively higher weight than the other three MIPS
performance categories and the weight for this category will be reduced
from 50 to 30 percent as of the 2021 payment year, we proposed to
evenly redistribute the 10 percent cost weight to the improvement
activities and advancing care information performance categories so
that the distribution does not shift the relative weight of the
quality performance category in the direction opposite to the one in
which it will change in the future. Because the quality performance
category weight is required under the statute to be reduced to 30
percent after the first 2 years of MIPS, we believe that increasing the quality
performance category weight is incongruous with the eventual balance of
the weights set forth in the statute. The redistributed cost
performance category weight of 10 percent would result in a 5
percentage point increase (from 15 to 20 percent) for the improvement
activities performance category and a 5 percentage point increase (from
25 to 30 percent) for the advancing care information performance
category. We invited comments on the proposed redistributed weights and
specifically on whether we should also increase the MIPS quality
performance category weight.
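For illustration only (not part of the regulatory text), the redistribution arithmetic described above can be verified as follows:

```python
# Proposed MIPS APM performance category weights (first performance
# period) before and after redistributing the 10 percent cost weight
# evenly to the improvement activities and advancing care information
# performance categories.
base = {
    "quality": 50,
    "cost": 10,
    "improvement_activities": 15,
    "advancing_care_information": 25,
}

redistributed = dict(base)
redistributed["improvement_activities"] += base["cost"] // 2       # 15 -> 20
redistributed["advancing_care_information"] += base["cost"] // 2   # 25 -> 30
redistributed["cost"] = 0

# The weights must total 100 percent before and after redistribution.
assert sum(base.values()) == sum(redistributed.values()) == 100
```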
In the proposed rule, we explained that we understand that as the
MIPS cost performance category evolves over time, there might be
greater potential for alignment and less potential duplication or
conflict with MIPS cost measurement for MIPS eligible clinicians
participating in MIPS APMs such as the Next Generation ACO Model. We
stated that we would continue to monitor and consider how we might
incorporate an assessment in the MIPS cost performance category into
the APM scoring standard for the Next Generation ACO Model. We also
understand that reducing the cost weight to zero and redistributing the
weight to the improvement activities and advancing care information
performance categories could, to the extent that improvement activities
and advancing care information performance category scores are higher
than the scores MIPS eligible clinicians would have received under the
cost performance category, result in higher final scores on average for
MIPS eligible clinicians in APM Entity groups participating in the Next
Generation ACO Model. We solicited comment on the possible alternative
of assigning a neutral score to APM Entity groups participating in the
Next Generation ACO Model for the cost performance category in order to
moderate APM Entity scores. We also generally sought comment on our
proposed policy, and on whether and how we should incorporate the cost
performance category into the APM scoring standard for MIPS eligible
clinicians in APM Entity groups participating in the Next Generation
ACO Model for future years.
[[Page 77262]]
The following is a summary of the comments we received regarding
our proposal to reduce the MIPS cost performance category weight to
zero for APM Entity groups in the Next Generation ACO Model.
Comment: Many commenters supported our proposal to not assess cost
for MIPS APMs, including the Next Generation ACO Model.
Response: We appreciate commenters' widespread support for this
proposal. While we will continue to monitor and consider how we might
in future years incorporate the MIPS cost performance category into the
APM scoring standard for participants in the Next Generation ACO Model,
we believe that assessment in this category would conflict with Next
Generation ACO Model assessment at this time. Participants in the Next
Generation ACO Model are assessed through particular attribution and
benchmarking methodologies for purposes of earning shared savings
payments; adding additional and separate MIPS incentives around cost
would be redundant, potentially confusing, and could undermine the
incentives built into the Next Generation ACO Model.
We are finalizing our proposal to reduce the cost performance
category weight to zero for MIPS eligible clinicians in APM Entity
groups participating in the Next Generation ACO Model and to evenly
redistribute the 10 percent cost weight to the improvement activities
and advancing care information performance categories without changes.
(11) Next Generation ACO Model--Improvement Activities and Advancing
Care Information Performance Category Scoring Under the APM Scoring
Standard
We proposed that all MIPS eligible clinicians participating in the
Next Generation ACO Model would submit data for the improvement
activities and advancing care information performance categories. MIPS
eligible clinicians participating in the Next Generation ACO Model may
bill through a TIN that includes other MIPS eligible clinicians not
participating in the APM. Therefore, for both the improvement activities
and advancing care information performance categories, we proposed that
MIPS eligible clinicians participating in the Next Generation ACO Model
would submit individual level data to MIPS and not group level data.
For both the improvement activities and advancing care information
performance categories, the scores from all of the individual MIPS
eligible clinicians in the APM Entity group would be aggregated to the
APM Entity level and averaged for a mean score. Any individual MIPS
eligible clinicians that do not report for purposes of the improvement
activities performance category or the advancing care information
performance category would contribute a score of zero for that
performance category in the calculation of the APM Entity score. All
MIPS eligible clinicians in the APM Entity group would receive the same
APM Entity score.
Because the MIPS quality performance category bears a relatively
higher weight than the other three MIPS performance categories, we
proposed to evenly redistribute the 10 percent cost performance
category weight to the improvement activities and advancing care
information performance categories. Section 1848(q)(5)(C)(i) of the Act
requires that MIPS eligible clinicians who are in a practice that is
certified as a patient-centered medical home or comparable specialty
practice, as determined by the Secretary, for a performance period
shall be given the highest potential score for the improvement
activities performance category. Accordingly, a MIPS eligible clinician
participating in an APM Entity that meets the definition of a patient-
centered medical home or comparable specialty practice will receive the
highest potential improvement activities score. Additionally, section
1848(q)(5)(C)(ii) of the Act requires that MIPS eligible clinicians
participating in APMs that are not patient-centered medical homes for a
performance period shall earn a minimum score of one-half of the
highest potential score for improvement activities.
For the APM scoring standard for the first MIPS performance period,
we proposed to weight the improvement activities and advancing care
information performance categories for the Next Generation ACO Model in
the same way that we proposed to weight those categories for the Shared
Savings Program: 20 percent and 30 percent for improvement activities
and advancing care information, respectively. We solicited comment on
our proposals for reporting and scoring the improvement activities and
advancing care information performance categories under the APM scoring
standard. In particular, we solicited comment on the appropriate weight
distributions in the first performance year.
The following is a summary of the comments we received regarding
our proposals to score and weight the improvement activities and
advancing care information performance categories for APM Entity groups
in the Next Generation ACO under the APM scoring standard.
Comment: Several commenters suggested that all APM Entities
including ACOs in the Next Generation ACO Model should receive full
credit for improvement activities because they are already performing
these activities as a result of being a participant in an APM. Some
commenters also indicated that improvement activities should be
reported at the APM Entity level rather than reported at the individual
level and then averaged. A few commenters believed that CMS should allow
reporting at the APM Entity level for all performance categories. Some
commenters also believed that the advancing care information
performance category should not be part of the APM scoring standard but
rather incorporated into APM design through CEHRT requirements. One
commenter indicated that the activities that lead to success in the
Next Generation ACO Model directly overlap with the proposed
improvement activities.
Response: We agree that we can streamline reporting and scoring for
the improvement activities and advancing care information performance
categories while recognizing the work Next Generation ACO Model
participants do in pursuit of the APM goals. Therefore, as described
below, for purposes of the APM scoring standard we will assign an
improvement activities score to the Next Generation ACO Model based on
the improvement activities required under the Model.
Regarding the advancing care information performance category, we
do not believe that there is a compelling reason to exclude assessment
in this performance category from the APM scoring standard in the same
way that we are reducing the weight of the cost performance category.
We do not see advancing care information measurement as duplicative or
in conflict with Next Generation ACO Model goals and requirements.
Participation in the Next Generation ACO Model is aligned with many
MIPS improvement activities measures. This is why we are finalizing a
policy that further reduces MIPS reporting burdens for Next Generation
ACO Model participants and recognizes the similarities between MIPS
improvement activities and the requirements of participating in the
Next Generation ACO Model.
Comment: A commenter requested clarification of our proposal that
MIPS eligible clinicians participating in the Next Generation ACO would
submit data for the improvement activities
[[Page 77263]]
performance category to MIPS individually, and not as a group.
Response: The proposed policy involved individual reporting of
improvement activities, which would be averaged across the ACO for one
APM Entity group score. The finalized policy, described below, no
longer requires individual reporting for purposes of the improvement
activities performance category.
Comment: A commenter noted that Next Generation ACO participants
who are determined to be Partial QPs for a year may be disadvantaged
given the reweighting of MIPS categories under the APM scoring
standard.
Response: We do not believe there is a disadvantage for Partial QPs
who achieve that status through participation in any Advanced APM,
including the Next Generation ACO Model to the extent it is determined
to be an Advanced APM. As discussed in section II.F.5., the eligible
clinicians who are Partial QPs can decide at the APM Entity group level
to be subject to the MIPS reporting requirements and payment
adjustment, in which case the eligible clinicians in the group would be
scored under the APM scoring standard, or to be excluded from MIPS for
the year.
In response to comments, we are revising our proposal with respect
to the scoring of the improvement activities performance category for
the Next Generation ACO Model. CMS will assign all APM Entity groups in
the Next Generation ACO Model the same improvement activities score
based on the improvement activities required by the Next Generation ACO
Model. To develop the improvement activities score assigned to all APM
Entity groups in the Next Generation ACO Model, CMS will compare the
requirements under the Next Generation ACO Model with the list of
improvement activities measures in section II.E.5.f. of this final rule
with comment period and score those measures in the same manner that
they are otherwise scored for MIPS eligible clinicians according to
section II.E.5.f. of this final rule with comment period. Thus, points
assigned for participation in the Next Generation ACO Model will relate
to documented requirements under the terms of the Next Generation ACO
Model. We will publish the assigned improvement activities performance
category score for the Next Generation ACO Model, based on the APM's
improvement activity requirements, prior to the start of the
performance period. In the event that the assigned score does not
represent the maximum improvement activities score, APM Entities will
have the opportunity to report additional improvement activities that
would be applied to the baseline APM Entity group score. In the event
that the baseline assigned score represents the maximum improvement
activities score, APM Entities will not need to report additional
improvement activities.
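As a concrete illustration of this baseline-plus-additional-reporting mechanic, the following sketch shows how an assigned baseline score would be topped up by additionally reported activities and capped at the maximum. This is an illustrative reading of the policy, not CMS code; the 40-point maximum and the function name are assumptions for the example.

```python
def ia_entity_score(assigned, additional_points=0, max_score=40):
    """Improvement activities score for a Next Generation ACO APM Entity
    group: every group starts from the CMS-assigned baseline; if the
    baseline is below the maximum, additionally reported activities add
    points, capped at the maximum score. (The 40-point maximum is an
    assumption for illustration.)"""
    if assigned >= max_score:
        return max_score  # baseline already at the maximum; no further reporting needed
    return min(max_score, assigned + additional_points)
```

For example, a group assigned a baseline of 20 points that reports 10 additional points would score 30, while a group assigned the full 40 points would not need to report anything further.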
In order to further reduce reporting burden and align with the
generally applicable MIPS group reporting option, we are revising the
advancing care information scoring policy for the Next Generation ACO
Model. A MIPS eligible clinician may receive a score for the advancing
care information performance category either through individual
reporting or through group reporting based on a TIN according to the
generally applicable MIPS reporting and scoring rules for the advancing
care information performance category, described in section II.E.5.g of
this final rule with comment period. We will attribute one advancing
care information score to each MIPS eligible clinician in an APM Entity
by looking at both individual and group data submitted for a MIPS
eligible clinician and using the highest reported score. Thus, instead
of only using individual scores to derive an APM Entity-level advancing
care information score as proposed, we will use the highest score
attributable to each MIPS eligible clinician in an APM Entity group in
order to determine the APM Entity group score based on the average of
the highest scores for all MIPS eligible clinicians in the APM Entity
group.
Like the proposed policy, each MIPS eligible clinician in the APM
Entity group will receive one score, weighted equally with that of the
other clinicians in the group, and CMS will calculate a single APM
Entity-level advancing care information performance category score.
Also like the proposed policy, for a MIPS eligible clinician who has no
advancing care information performance category score--if the
individual's TIN did not report as a group and the individual did not
report--that MIPS eligible clinician will contribute a score of zero to
the aggregate APM Entity group score.
In summary, we will attribute one advancing care information
performance category score to each MIPS eligible clinician in an APM
Entity group, which will be averaged with the scores of all other MIPS
eligible clinicians in the APM Entity group to derive a single APM
Entity score. In attributing a score to an individual, we will use the
highest score attributable to the TIN/NPI combination of a MIPS
eligible clinician. Finally, if there is no group or individual score,
we will attribute a zero to the MIPS eligible clinician, which will be
included in the aggregate APM Entity score.
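The attribution described in this summary can be sketched as follows. This is an illustrative reading of the policy, not an implementation used by CMS; the data structure, field names, and function name are assumptions.

```python
def aci_entity_score(clinicians):
    """Average advancing care information (ACI) scores across an APM
    Entity group, attributing to each MIPS eligible clinician the
    highest score available from individual or TIN-level group
    reporting.

    `clinicians` is a list of dicts with optional 'individual' and
    'group' ACI scores (hypothetical structure for illustration).
    A clinician with neither score contributes zero to the average,
    and every clinician is weighted equally in the single entity score.
    """
    attributed = []
    for c in clinicians:
        scores = [s for s in (c.get("individual"), c.get("group")) if s is not None]
        attributed.append(max(scores) if scores else 0.0)
    return sum(attributed) / len(attributed)
```

For example, in a three-clinician group where one reported individually (80), one was covered by TIN-level group reporting (90), and one did not report at all, the APM Entity score would be (80 + 90 + 0) / 3.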
We have revised this policy for the advancing care information
performance category for Next Generation ACOs under the APM scoring
standard because we recognize that individual reporting in the
advancing care information performance category for all MIPS eligible
clinicians in an APM Entity group may be more burdensome than allowing
some degree of group reporting where applicable, and we believe that
requiring individual reporting on advancing care information in the
Next Generation ACO Model context will not supply a meaningfully
greater amount of information regarding the use of EHR technology as
prescribed by the advancing care information performance category. We
believe that this revised policy maintains the alignment with the
generally applicable MIPS reporting and scoring requirements under the
advancing care information performance category while responding to
commenters' desires for reduced reporting requirements for MIPS APM
participants. Therefore, we believe that the revised policy, relative
to the proposed policy, has the potential to substantially reduce
reporting burden with little to no reduction in our ability to
accurately evaluate the adoption and use of EHR technology. We also
believe this final policy balances the simplicity of TIN-level group
reporting, which can reduce burden, with the flexibility needed to
address partial TIN scenarios common among Next Generation ACOs in
which a TIN may have some MIPS eligible clinicians participating in the
ACO and some MIPS eligible clinicians not in the ACO. Table 12
summarizes the final APM scoring standard rules for the Next Generation
ACO Model.
[[Page 77264]]
Table 12--APM Scoring Standard for the Next Generation ACO Model--2017 Performance Period for the 2019 Payment
Adjustment
----------------------------------------------------------------------------------------------------------------
MIPS Performance category        APM Entity submission requirement        Performance score        Performance category weight %
----------------------------------------------------------------------------------------------------------------
Quality.......................... ACOs submit quality The MIPS quality performance 50
measures to the CMS Web category requirements and
Interface on behalf of benchmarks will be used to
their participating MIPS determine the MIPS quality
eligible clinicians. performance category score at the
ACO level.
Cost............................. MIPS eligible clinicians N/A............................... 0
will not be assessed on
cost.
Improvement Activities........... ACOs only need to report CMS will assign the same 20
improvement activities improvement activities score to
data if the CMS-assigned each APM Entity group based on
improvement activities the activities required of
score is below the participants in the Next
maximum improvement Generation ACO Model.
activities score. This minimum score is one half of
the total possible points. If the
assigned score does not represent
the maximum improvement
activities score, ACOs will have
the opportunity to report
additional improvement activities
to add points to the APM Entity
group score.
Advancing Care Information....... Each MIPS eligible CMS will attribute one score to 30
clinician in the APM each MIPS eligible clinician in
Entity group reports the APM Entity group. This score
advancing care will be the highest score
information to MIPS attributable to the TIN/NPI
through either group combination of each MIPS eligible
reporting at the TIN clinician, which may be derived
level or individual from either group or individual
reporting. reporting. The scores attributed
to each MIPS eligible clinician
will be averaged to yield a
single APM Entity group score.
----------------------------------------------------------------------------------------------------------------
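For illustration, the category weights in Table 12 combine into a single score as in the sketch below. This is not CMS code: it assumes, for the example, that each performance category score is expressed as a percentage of its maximum (0-100), and the names are hypothetical.

```python
# Performance category weights for the Next Generation ACO Model under
# the APM scoring standard (2017 performance period), per Table 12.
NEXT_GEN_ACO_WEIGHTS = {
    "quality": 0.50,
    "cost": 0.00,
    "improvement_activities": 0.20,
    "advancing_care_information": 0.30,
}

def final_score(category_scores, weights=NEXT_GEN_ACO_WEIGHTS):
    """Weighted final score; each category score is assumed to be a
    percentage of its maximum (0-100). Unreported categories count as
    zero, and the weights must total 100 percent."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(w * category_scores.get(cat, 0.0) for cat, w in weights.items())
```

For example, quality of 80, a full improvement activities score of 100, and advancing care information of 60 would combine as 0.5(80) + 0.2(100) + 0.3(60) = 78.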
(12) MIPS APMs Other Than the Shared Savings Program and the Next
Generation ACO Model--Quality Performance Category Scoring Under the
APM Scoring Standard
For MIPS APMs other than the Shared Savings Program and the Next
Generation ACO Model, we proposed that eligible clinicians or APM
Entities would submit APM quality measures under their respective MIPS
APM as usual, and those eligible clinicians or APM Entities would not
also be required to submit quality information under MIPS for the first
performance period. Current MIPS APMs have requirements regarding the
number of quality measures and measure specifications, as well as the
measure reporting method(s) and frequency of reporting, and have an
established mechanism for submission of these measures to us. We
believe there are operational considerations and constraints that would
prevent us from being able to use the quality measure data from some
MIPS APMs for the purpose of satisfying the MIPS data submission
requirements for the quality performance category in the first
performance period. For example, some current APMs use a quality
measure data collection system or vehicle that is separate and distinct
from the MIPS systems. We do not believe there is sufficient time to
adequately implement changes to the current APM quality measure data
collection timelines and infrastructure to conduct a smooth hand-off to
the MIPS system that would enable use of APM quality measure data to
satisfy the MIPS quality performance category requirements in the first
MIPS performance period. As we have noted, we are concerned about
subjecting MIPS eligible clinicians who participate in MIPS APMs to
multiple performance assessments--under MIPS and under the APMs--that
are not necessarily aligned and that could potentially undermine the
validity of testing or performance evaluation under the APM. As stated
in the proposed rule, our goal is to reduce MIPS eligible clinician
reporting burden by not requiring APM participants to report quality
data twice to us, and to avoid misaligned performance incentives.
Therefore, we proposed that, for the first MIPS performance period
only, for MIPS eligible clinicians participating in APM Entity groups
in MIPS APMs (other than the Shared Savings Program or the Next
Generation ACO Model), we would reduce the weight for the quality
performance category to zero. As we explained in the proposed rule, we
believe it is necessary to do this because we require additional time
to make adjustments in systems and processes related to the submission
and collection of APM quality measures to align APM quality measures
with MIPS and ensure APM quality measure data can be submitted in a
time and manner sufficient for use in assessing quality performance
under MIPS and under the APM. Additionally, the implementation of a new
program that does not account for non-MIPS measure sets, the
operational complexity of connecting APM performance to valid MIPS
quality performance category scores in the necessary timeframe, and the
uncertainty of the validity and equity of scoring results could
unintentionally undermine the quality performance assessments in MIPS
APMs. Finally, for purposes of performing valid evaluations of MIPS
APMs, we must reduce the number of confounding factors to the extent
feasible, which, in this case, would include reporting and assessment
on non-APM quality measures. Thus, we proposed to waive certain
requirements of section 1848(q) of the Act for the first MIPS
performance year to avoid risking adverse operational or program
evaluation consequences for MIPS APMs while we work toward
incorporating MIPS APM quality measures into MIPS scoring for future
MIPS performance periods.
Accordingly, under section 1115A(d)(1) of the Act, we proposed to
waive--for MIPS eligible clinicians participating in MIPS APMs other
than the Shared Savings Program or the Next Generation ACO Model--the
requirement under section 1848(q)(5)(E)(i)(I) of the Act that specifies
the scoring weight for the quality performance category. With the
proposed reduction of the quality performance category weight to zero,
we believe it would be unnecessary to establish an annual final list of
quality measures as required under section 1848(q)(2)(D) of the Act, or
to specify
[[Page 77265]]
and use quality measures in determining the MIPS final score for these
MIPS eligible clinicians. Therefore, under section 1115A(d)(1) of the
Act, we proposed to waive--for MIPS eligible clinicians participating
in MIPS APMs other than the Shared Savings Program or the Next
Generation ACO Model--the requirements under sections 1848(q)(2)(D),
1848(q)(2)(B)(i) and 1848(q)(2)(A)(i) of the Act to establish a final
list of quality measures (using certain criteria and processes); and to
specify and use, respectively, quality measures in calculating the MIPS
final score, for these MIPS eligible clinicians.
We anticipated that beginning in the second MIPS performance
period, the APM quality measure data submitted to us during the MIPS
performance period would be used to derive a MIPS quality performance
score for APM Entities in all APMs that meet criteria for application
of the APM scoring standard. We also anticipated that it may be
necessary to propose policies and waivers of different requirements of
the statute--such as one for section 1848(q)(2)(D) of the Act, to
enable the use of non-MIPS quality measures in the quality performance
category score--through future rulemaking. We indicated that we expect
that by the second MIPS performance period we will have had sufficient
time to resolve operational constraints related to use of separate
quality measure systems and to adjust quality measure data submission
timelines. Therefore, beginning with the second MIPS performance
period, we anticipated that through use of the waiver authority under
section 1115A(d)(1) of the Act, the quality measure data for APM
Entities for which the APM scoring standard applies would be used for
calculation of a MIPS quality performance score in a manner specified
in future rulemaking. We solicited comment on this transitional
approach to use of APM quality measures for the MIPS quality
performance category for purposes of the APM scoring standard under
MIPS in future years.
The following is a summary of the comments we received regarding
our proposal to, for the first MIPS performance period, reweight the
quality performance category to zero for APM Entity groups in MIPS APMs
other than the Shared Savings Program or the Next Generation ACO Model.
Comment: A commenter supported exempting MIPS APMs that are not
using the CMS Web Interface to report quality from reporting for
purposes of the MIPS quality performance category in the first
performance year. One commenter was concerned that these MIPS APMs will
not receive a quality score for the first performance year and another
commenter recommended revising the performance category weights so that
quality is included.
Response: We agree that it would be ideal to include performance on
quality for all MIPS APMs in the first MIPS performance year. As noted,
we are only reweighting the quality performance category to zero for
the first performance year due to operational limitations. APM Entities
in MIPS APMs are, under the policies adopted in this final rule with
comment period, required to base payment incentives on cost/utilization
and quality measure performance. As such, they will continue to report
quality as required under the APM, and are not truly exempt from
quality assessment for the year. We are finalizing the inclusion of a
MIPS quality performance category score under the APM scoring standard
for the 2018 performance year at Sec. 414.1370(f), and will develop
additional scoring policies for that year through future notice-and-
comment rulemaking.
We are finalizing as proposed the policy to reweight the MIPS
quality performance category to zero percent for APM Entity groups in
MIPS APMs other than the Shared Savings Program or the Next Generation
ACO Model for the first performance year.
(13) MIPS APMs Other Than the Shared Savings Program and the Next
Generation ACO Model--Cost Performance Category Scoring Under the APM
Scoring Standard
For the first MIPS performance period, we proposed, for MIPS
eligible clinicians participating in MIPS APMs other than the Shared
Savings Program or the Next Generation ACO Model, to reduce the weight
of the cost performance category to zero. We proposed this approach
because: (1) APM Entity groups are already subject to cost and
utilization performance assessments under MIPS APMs; (2) MIPS APMs
usually measure cost in terms of total cost of care, which is a broader
accountability standard that inherently encompasses the purpose of the
claims-based measures that have relatively narrow clinical scopes, and
MIPS APMs that do not measure cost in terms of total cost of care may
depart entirely from MIPS measures; and (3) the beneficiary attribution
methodologies differ for measuring cost under APMs and MIPS, leading to
an unpredictable degree of overlap (for eligible clinicians and for
CMS) between the sets of beneficiaries for which eligible clinicians
would be responsible that would vary based on unique APM Entity
characteristics such as which and how many eligible clinicians comprise
an APM Entity. We believe that, given an APM Entity's finite resources
for engaging in efforts to improve quality and lower costs for a
specified beneficiary population, the population identified through the
APM must take priority to ensure that the goals and model evaluation
associated with the APM are as clear and free of confounding factors as
possible. The potential for different, conflicting results across APM
and MIPS assessments creates uncertainty for MIPS eligible clinicians
who are attempting to strategically transform their respective
practices and succeed under the terms of an APM. Accordingly, under
section 1115A(d)(1) of the Act, we proposed to waive--for MIPS eligible
clinicians participating in MIPS APMs other than the Shared Savings
Program or the Next Generation ACO Model--the requirement under section
1848(q)(5)(E)(i)(II) of the Act that specifies the scoring weight for
the cost performance category.
With the proposed reduction of the cost performance category weight
to zero, we believed it would be unnecessary to specify and use cost
measures in determining the MIPS final score for these MIPS eligible
clinicians. Therefore, under section 1115A(d)(1) of the Act, we
proposed to waive--for MIPS eligible clinicians participating in MIPS
APMs other than the Shared Savings Program or the Next Generation ACO
Model--the requirements under sections 1848(q)(2)(B)(ii)
and 1848(q)(2)(A)(ii) of the Act to specify and use, respectively, cost
measures in calculating the MIPS final score for such eligible
clinicians.
Given the proposal to waive requirements of section 1848(q) of the
Act to reduce the weight of the quality and cost performance categories
to zero, we also needed to specify how those weights would be
redistributed among the remaining improvement activities and advancing
care information categories in order to maintain a total weight of 100
percent. We proposed to redistribute the quality and the cost
performance category weights as specified in Table 14 of the proposed
rule.
We understand that as the cost performance category evolves, the
rationale we discussed in the proposed rule for establishing a weight
of zero for this performance category might not be applicable in future
years. We solicited comment on whether and how we should incorporate
the cost performance category into the APM scoring standard
[[Page 77266]]
under MIPS. We also understand that reducing the quality and cost
performance category weights to zero and redistributing the weight to
the improvement activities and advancing care information performance
categories could, to the extent that improvement activities and
advancing care information scores are higher than the scores MIPS
eligible clinicians would have received under the cost performance
category, result in higher final scores on average for MIPS eligible
clinicians in APM Entity groups participating in MIPS APMs. We
solicited comment on the possible alternative of assigning a neutral
score to MIPS eligible clinicians in APM Entity groups participating in
MIPS APMs for the quality and cost performance categories in order to
moderate APM Entity scores.
The following is a summary of the comments we received regarding
our proposal to establish a MIPS cost performance category weight of
zero for all MIPS eligible clinicians in APM Entities participating in
MIPS APMs other than the Shared Savings Program and the Next
Generation ACO Model.
Comment: The majority of commenters supported not assessing cost
for MIPS APMs by reducing the weight for the cost performance category
to zero.
Response: We appreciate commenters' widespread support for this
proposal. While we will continue to monitor and consider how we might
in future years incorporate the MIPS cost performance category into the
APM scoring standard for all MIPS APMs, we believe that inclusion of
this category would conflict with the assessment of cost made within
MIPS APMs at this time. Participants in MIPS APMs are assessed through
particular attribution and benchmarking methodologies for purposes of
incentives and penalties; adding additional and separate MIPS
incentives around cost would be redundant, potentially confusing, and
could undermine the incentives built into these MIPS APMs.
We are finalizing the proposal to reduce the cost performance
category weight to zero percent for APM Entity groups in MIPS APMs
other than the Shared Savings Program or the Next Generation ACO Model.
(14) MIPS APMs Other Than the Shared Savings Program and the Next
Generation ACO Model--Improvement Activities and Advancing Care
Information Performance Category Scoring Under the APM Scoring Standard
We proposed that all MIPS eligible clinicians participating in a
MIPS APM other than the Shared Savings Program or the Next Generation
ACO Model would submit data for the improvement activities and
advancing care information performance categories as individual MIPS
eligible clinicians. MIPS eligible clinicians in these other APMs may
bill through a TIN that includes MIPS eligible clinicians who do not
participate in the APM. Therefore, for both the improvement activities
and the advancing care information performance categories, we proposed
that these MIPS eligible clinicians submit individual level data to
MIPS and not group level data. For both the improvement activities and
advancing care information performance categories, the scores from all
of the individual MIPS eligible clinicians in the APM Entity group
would be aggregated to the APM Entity level and averaged for a mean
score. Any individual MIPS eligible clinicians that do not submit data
for the improvement activities performance category or the advancing
care information performance category would contribute a score of zero
for that performance category in the calculation of the APM Entity
score. All MIPS eligible clinicians in the APM Entity group would
receive the same APM Entity group score.
Section 1848(q)(5)(C)(i) of the Act requires that MIPS eligible
clinicians who are in a practice that is certified as a patient-
centered medical home or comparable specialty practice, as determined
by the Secretary, for a performance period shall be given the highest
potential score for the improvement activities performance category.
Accordingly, a MIPS eligible clinician in an APM Entity group that
meets the definition of a patient-centered medical home or comparable
specialty practice will receive the highest potential score.
Additionally, section 1848(q)(5)(C)(ii) of the Act requires that MIPS
eligible clinicians participating in APMs that are not patient-centered
medical homes for a performance period shall earn a minimum score of
one-half of the highest potential score for improvement activities. We
acknowledged that using this increased weight for improvement
activities may make it easier in the first performance period for
eligible clinicians in a MIPS APM to attain a higher MIPS score. We do
not have historical data to assess the range of scores under
improvement activities because this is the first time such activities
are being assessed in such a manner.
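The statutory scoring rules in section 1848(q)(5)(C) of the Act amount to a floor on the improvement activities score, which can be expressed as in this minimal sketch. It is illustrative only, not CMS code; the 100-point maximum and the parameter names are assumptions for the example.

```python
def improvement_activities_score(assessed_points, max_points=100,
                                 in_pcmh=False, in_apm=True):
    """Apply the statutory improvement activities scoring rules:
    - a practice certified as a patient-centered medical home (or
      comparable specialty practice) receives the highest potential
      score;
    - other APM participants earn at least one-half of the highest
      potential score, regardless of assessed points.
    (The 100-point maximum is an assumption for illustration.)"""
    if in_pcmh:
        return max_points
    floor = max_points / 2 if in_apm else 0
    return min(max_points, max(assessed_points, floor))
```

Thus an APM participant assessed at only 10 points would still receive 50 of 100, while a patient-centered medical home would receive the full 100 regardless of assessed points.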
For the advancing care information performance category, we
explained our belief that MIPS eligible clinicians participating in
MIPS APMs would be using certified health IT and other health
information technology to coordinate care and deliver better care to
their patients. Most MIPS APMs encourage participants to use health IT
to perform population management, monitor their own quality improvement
activities, and better coordinate care for their patients in a way that
aligns with the goals of the advancing care information performance
category. In the proposed rule, we indicated that we want to ensure
that where we proposed reductions in weights for other MIPS performance
categories, such weights are appropriately redistributed to the
advancing care information performance category.
Therefore, for the first MIPS performance period, we proposed that
the weights for the improvement activities and advancing care
information performance categories would be 25 percent and 75 percent,
respectively. We solicited comment on our proposals for reporting and
scoring the improvement activities and advancing care information
performance categories under the APM scoring standard. In particular,
we solicited comment on the appropriate weight distributions in the
first performance year and subsequent years when we anticipate
incorporating assessment in the quality performance category for all
MIPS eligible clinicians participating in MIPS APMs.
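Putting the proposed first-period weights together with the individual-to-entity averaging described in this section yields, in sketch form, the following. This is an illustrative reading of the proposal, not CMS code; the names and the assumption that scores are on a 0-100 scale are hypothetical.

```python
def entity_mean(scores):
    """Average individual MIPS eligible clinician scores to the APM
    Entity level; clinicians who did not submit data are passed as
    None and contribute a score of zero."""
    return sum(s or 0.0 for s in scores) / len(scores)

def apm_entity_final_score(ia_scores, aci_scores):
    """Proposed first-period APM scoring standard for MIPS APMs other
    than the Shared Savings Program and the Next Generation ACO Model:
    quality and cost are weighted at zero, improvement activities at
    25 percent, and advancing care information at 75 percent."""
    return 0.25 * entity_mean(ia_scores) + 0.75 * entity_mean(aci_scores)
```

For example, two clinicians with improvement activities scores of 100 and no submission (zero), and advancing care information scores of 80 and 40, would yield 0.25(50) + 0.75(60) = 57.5.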
The following is a summary of the comments we received regarding
our proposals to score and weight the improvement activities and
advancing care information performance categories for MIPS eligible
clinicians participating in APM Entity groups in MIPS APMs other than
the Shared Savings Program and the Next Generation ACO Model under the
APM scoring standard.
Comment: Some commenters were concerned that, because eligible
clinicians in MIPS APMs would be scored only on the advancing care
information and improvement activities performance categories,
clinicians in those MIPS APMs could disproportionately receive upward
MIPS payment adjustments, as they would not be assessed in the quality
or cost performance categories. Commenters believed that it may be
easier for clinicians to
perform well in the improvement activities and advancing care
information performance categories than in the quality and cost
performance categories. Although a few commenters supported the
proposed performance category weights, other commenters suggested
alternatives. Two commenters were concerned about the performance
category scoring weights
[[Page 77267]]
for MIPS APMs under the APM scoring standard and suggested that the
weights for the advancing care information and improvement activities
performance categories should be similar to the ones proposed for the
Shared Savings Program and Next Generation ACO Model. Two other
commenters suggested assigning greater weight to the improvement
activities performance category instead of redistributing so much of
the weight to the advancing care information performance category. A
few commenters suggested redistributing the weights from the quality
and cost performance categories to the improvement activities and
advancing care information performance categories differently--for
example, 50 percent for improvement activities and 50 percent for
advancing care information. One commenter indicated they understood the
need to reweight the improvement activities and advancing care
information performance categories for MIPS APMs other than the Shared
Savings Program and the
Next Generation ACO Model but requested that, in making reweighting
decisions, CMS give consideration to ensuring a ``level playing
field.'' A few commenters expressed concern that the proposed APM
scoring standard for MIPS APMs increases the advancing care information
category weight to 75 percent, and a commenter stated that performance
in this category could be challenging for many clinicians, particularly
those with little control over the IT choices and decisions made by
their employers. A commenter recommended basing performance in this
category on the adoption and use of EHR technology tailored to a
specialty-appropriate assessment of meaningful use and urged CMS to
work closely with physician societies.
Response: We understand that an APM Entity group's final score
under the proposed weights for the APM scoring standard could differ
from the final score such APM Entity groups could receive if they were
subject to both the quality and cost performance categories. However,
for reasons discussed above, reweighting the quality performance
category to zero percent is necessary for operational and programmatic
reasons only for the first performance year, and we anticipate being
able to incorporate performance under MIPS APM quality measures
beginning in the second year of the Quality Payment Program, subject to
future rulemaking. Also, in light of the MIPS scoring policies we are
finalizing for the first performance year, we do not believe that this
will cause a material adverse impact on MIPS scoring because MIPS
payment adjustments for an eligible clinician will be affected more by
meeting the minimum reporting requirements than by the weighting of
performance categories. In subsequent years, we intend to
incorporate assessments in the quality performance category into the
APM scoring standard for all MIPS APMs, and the performance category
weights will no longer so heavily emphasize advancing care information.
For the first performance year, we believe that the proposed balance
between improvement activities and advancing care information is
appropriate, especially given the possibility that MIPS APM
participants may be assigned the maximum improvement activities score
under our final policy, as described below.
Comment: A commenter stated that improvement activities reporting
should be done by the APM Entity and that advancing care information
should not be part of the APM scoring standard. Several commenters
suggested that all APM Entities should receive full credit for
improvement activities because they are already performing these
activities as a result of being a participant in an APM. Other
commenters suggested that both advancing care information and
improvement activities be reported and scored at the individual level
instead of being aggregated to the APM Entity level. A few commenters
believed that CMS should allow reporting at the APM Entity level for
all performance categories.
Response: In contrast to the cost performance category, we do not
find a compelling reason to reduce the weight of the advancing care
information performance category because we do not believe it would
potentially conflict with or duplicate assessments that are made within
the MIPS APM.
We agree with commenters that reporting in the improvement
activities performance category could be more efficient if done by an
APM Entity on behalf of the APM Entity group. In order to further
reduce reporting burden on all parties and to better recognize
improvement activities work performed through participation in MIPS
APMs, we are modifying our proposal with respect to scoring for the
improvement activities performance category under the MIPS APM scoring
standard. As described above, we will assign an improvement activities
performance category score at the MIPS APM level based on the
requirements of participating in the particular MIPS APM. The baseline
score will be applied to each APM Entity group in the MIPS APM. In the
event that the assigned score is less than the maximum score, we would
allow the APM Entity to report additional activities to add points to
the APM Entity group score. With regard to the comment suggesting
scoring improvement activities at the individual level, we believe that
reporting and scoring improvement activities at the APM Entity level
support the goals of APM participation, which focus on collective
responsibility for the cost and quality of care for beneficiaries.
Similarly, we agree with the comments pointing out that eligible
clinicians participating in MIPS APMs are actively engaged in
improvement activities by virtue of participating in the APM.
Comment: A commenter sought clarification regarding how a subgroup
of MIPS eligible clinicians that is not participating in a MIPS APM
will be treated when other MIPS eligible clinicians in the same large
multispecialty practice participate in a MIPS APM.
Response: We maintain lists of participants that are in the MIPS
APM using the APM participant identifier, and those MIPS eligible
clinicians will be scored as an APM Entity group under the APM scoring
standard. The non-APM participants in the practice will report to MIPS
under the generally applicable MIPS requirements for reporting as an
individual or group. If the practice decides to report to MIPS as a
group under its TIN, then its reporting may include some data from the
MIPS APM participants, even though those TIN/NPI combinations will
receive their MIPS final score based on the APM Entity group according
to the scoring hierarchy in section II.E.6. of this final rule with
comment period.
We are revising the proposed improvement activities scoring policy
for MIPS APMs other than the Shared Savings Program or the Next
Generation ACO Model. CMS will assign a score for the improvement
activities performance category to each MIPS APM, and that score will
be applied to each APM Entity group in the MIPS APM. To develop the
improvement activities score for a MIPS APM, CMS will compare the
requirements of the MIPS APM with the list of improvement activities
measures in section II.E.5.f. of this final rule with comment period
and score those measures in the same manner that they are otherwise
scored for MIPS eligible clinicians according to section II.E.5.f. of
this final rule with comment period. Thus, points assigned to an APM
Entity group in a MIPS APM under the improvement activities performance
category will relate to
[[Page 77268]]
documented requirements under the terms and conditions of the MIPS APM.
We will publish the assigned improvement activities scores for each
MIPS APM on the CMS Web site prior to the beginning of the MIPS
performance period. In the event that the assigned score does not
represent the maximum improvement activities score, APM Entities will
have the opportunity to report additional improvement activities that
would apply to the APM Entity group score. In the event that the
assigned score represents the maximum improvement activities score, APM
Entity groups will not need to report additional improvement
activities.
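Under this revised policy, the score CMS assigns to the MIPS APM functions as a floor that an APM Entity group may build on by reporting additional activities, with the total capped at the category maximum. The following sketch illustrates one reading of that arithmetic; the function name, point values, and the 40-point maximum are illustrative assumptions, not part of this final rule with comment period:

```python
def ia_entity_score(assigned_score, additional_points=0.0, max_score=40.0):
    """Improvement activities score for an APM Entity group under the
    APM scoring standard (illustrative sketch).

    CMS assigns `assigned_score` at the MIPS APM level based on the
    activities the APM requires of its participants. If that score is
    below the maximum, the APM Entity may report additional activities
    to add points; the total is capped at the maximum. The 40-point
    default for `max_score` is an assumption for illustration only.
    """
    if assigned_score >= max_score:
        return max_score  # already at the maximum; no additional reporting needed
    return min(assigned_score + additional_points, max_score)
```

For example, under this sketch an APM whose assigned score is 20 points would reach the 40-point maximum only if the APM Entity reports at least 20 additional points of activities.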
In order to further reduce reporting burden and align with the
generally applicable MIPS group reporting option, we are also revising
the proposed advancing care information scoring policy for MIPS APMs
other than the Shared Savings Program and the Next Generation ACO
Model.
A MIPS eligible clinician may receive a score for the advancing
care information performance category either through individual
reporting or through group reporting based on a TIN according to the
generally applicable MIPS reporting and scoring rules for the advancing
care information performance category, described in section II.E.5.g.
of this final rule with comment period. We will attribute one score to
each MIPS eligible clinician in an APM Entity group by looking for both
individual and group data submitted for a MIPS eligible clinician and
using the highest score. Thus, instead of only using individual scores
to derive an APM Entity-level advancing care information score as
proposed, we will use the highest score attributable to each MIPS
eligible clinician in an APM Entity group in order to create the APM
Entity group score based on the average of the highest scores for all
MIPS eligible clinicians in the APM Entity group.
Like the proposed policy, each MIPS eligible clinician in the APM
Entity group will receive one score, weighted equally with that of the
other clinicians in the group, and we will calculate a single APM
Entity-level advancing care information score. Also like the proposed
policy, for a MIPS eligible clinician who has no advancing care
information score attributable to the individual--the individual's TIN
did not report as a group and the individual did not report--that MIPS
eligible clinician will contribute a score of zero to the aggregate APM
Entity group score.
In summary, we will attribute one advancing care information score
to each MIPS eligible clinician in an APM Entity group, which will be
averaged with the scores of all other MIPS eligible clinicians in the
APM Entity group to derive a single APM Entity score. In attributing a
score to an individual, we will use the highest score attributable to
the TIN/NPI combination of a MIPS eligible clinician. Finally, if there
is no group or individual score, we will attribute a zero to the MIPS
eligible clinician, which will be included in the aggregate APM Entity
score.
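The attribution and averaging method described above can be sketched as follows; this is an illustrative reading of the policy, and the function and data-structure names are hypothetical:

```python
def aci_entity_score(clinicians, individual_scores, group_scores):
    """Advancing care information score for an APM Entity group
    (illustrative sketch of the attribution policy).

    clinicians: list of (TIN, NPI) pairs in the APM Entity group.
    individual_scores: (TIN, NPI) -> score from individual reporting.
    group_scores: TIN -> score from group reporting at the TIN level.

    Each clinician is attributed the highest score available from
    either reporting route; a clinician with no score from either
    route contributes zero. The attributed scores are then averaged,
    weighted equally, to yield a single APM Entity group score.
    """
    attributed = [
        max(individual_scores.get((tin, npi), 0.0), group_scores.get(tin, 0.0))
        for tin, npi in clinicians
    ]
    return sum(attributed) / len(attributed)
```

For example, in a three-clinician APM Entity group where one clinician reported individually for 80 points, that clinician's TIN reported as a group for 70 points, and the third clinician's TIN did not report at all, the attributed scores would be 80, 70, and 0, yielding an APM Entity group score of 50.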
We have revised the proposed policy for the advancing care
information performance category for MIPS APM participants under the
APM scoring standard because we recognize that individual reporting in
the advancing care information performance category for all MIPS
eligible clinicians in an APM Entity group may be more burdensome than
allowing some degree of group reporting where applicable, and we
believe that requiring individual reporting on advancing care
information in the MIPS APM context will not supply a meaningfully
greater amount of information regarding the use of EHR technology as
prescribed by the advancing care information performance category. We
believe that this revised policy maintains the alignment with the
generally applicable MIPS reporting and scoring requirements under the
advancing care information performance category while responding to
commenters' desires for reduced reporting requirements for MIPS APM
participants. Therefore, we believe that the revised policy, relative
to the proposed policy, has the potential to substantially reduce
reporting burden with little to no reduction in our ability to
accurately evaluate the adoption and use of EHR technology. We also
believe this final policy balances the simplicity of TIN-level group
reporting, which can reduce burden, with the flexibility needed to
address partial TIN scenarios common among APM Entities in MIPS APMs in
which a TIN may have some MIPS eligible clinicians participating in the
APM Entity and some MIPS eligible clinicians not in the APM Entity.
Table 13 summarizes the finalized APM scoring standard rules for MIPS
APMs other than the Shared Savings Program and Next Generation ACO
Model.
Table 13--APM Scoring Standard for MIPS APMs Other Than the Shared Savings Program and Next Generation ACO
Model--2017 Performance Period for the 2019 Payment Adjustment
----------------------------------------------------------------------------------------------------------------
Performance
MIPS Performance category APM Entity submission Performance score category
requirement weight %
----------------------------------------------------------------------------------------------------------------
Quality....................... The APM Entity group will not N/A............................ 0
be assessed on quality under
MIPS in the first performance
period. The APM Entity will
submit quality measures to CMS
as required by the APM.
Cost.......................... MIPS eligible clinicians will N/A............................ 0
not be assessed on cost.
Improvement Activities........ APM Entities only need to CMS will assign the same 25
report improvement activities improvement activities score
data if the CMS-assigned to each APM Entity group based
improvement activities score on the activities required of
is below the maximum participants in the MIPS APM.
improvement activities score. The minimum score is one half
of the total possible points.
If the assigned score does not
represent the maximum
improvement activities score,
APM Entities will have the
opportunity to report
additional improvement
activities to add points to
the APM Entity group score.
[[Page 77269]]
Advancing Care Information.... Each MIPS eligible clinician in CMS will attribute one score to 75
the APM Entity group reports each MIPS eligible clinician
advancing care information to in the APM Entity group. This
MIPS through either group score will be the highest
reporting at the TIN level or score attributable to the TIN/
individual reporting. NPI combination of each MIPS
eligible clinician, which may
be derived from either group
or individual reporting. The
scores attributed to each MIPS
eligible clinician will be
averaged to yield a single APM
Entity group score.
----------------------------------------------------------------------------------------------------------------
(15) APM Entity Data Submission Method
Presently, we require APM Entities in MIPS APMs to use either the
CMS Web Interface or another data submission mechanism for submitting
data on the quality measures for purposes of the APM. We are not
currently proposing to change the method used by APM Entities to submit
their quality measure data to CMS. Therefore, we expect that APM
Entities like the Shared Savings Program ACOs will continue to submit
their data on quality measures using the CMS Web Interface data
submission mechanism. Similarly, in the event that the Comprehensive
ESRD Care (CEC) Initiative is determined to be a MIPS APM, APM Entities
in the CEC would continue to submit their quality measures to CMS using
the Quality Measures Assessment Tool (QMAT) for purposes of the CEC
quality performance assessment under the APM. We proposed that all MIPS
eligible clinicians in APM Entities participating in MIPS APMs would be
required to use one of the proposed MIPS data submission mechanisms to
submit data for the advancing care information performance category.
The following is a summary of the comments we received regarding
the method used by APM Entities to submit quality data for purposes of
MIPS.
Comment: One commenter requested that all APM Entities be required
to use the QRDA III data submission method because many EHRs now
support this standard. Another commenter supported retaining the CMS
Web Interface as the submission method for quality data for APM
Entities participating in the Shared Savings Program. One commenter
suggested that the improvement activities information could be
collected via the CMS Web Interface. Another commenter suggested that
all MIPS performance categories be submitted via web-based reporting.
Some commenters communicated that MIPS eligible clinicians
participating in APMs should not have to report quality data separately
to both APMs and MIPS, and another commenter suggested that MIPS APM
participants only be required to submit data for the quality and
improvement activities performance categories.
Response: We appreciate the commenters' support and suggestions. We
believe the policies that we are adopting in this final rule regarding
data submission minimize reporting burden and disruption to APM
participants and we will continue to consider new reporting methods in
the future.
Comment: A commenter recommended that the data collection processes
be standardized and data submission be minimized to the extent that
data can be used for various purposes within the Medicare program
because rural practices often have human and IT infrastructure resource
limitations.
Response: We thank the commenter for this input and believe that
the finalized policies for the APM scoring standard represent further
reductions in reporting burden and reflect our commitment to streamline
submissions wherever possible. We will continue to look for ways to
reduce reporting burdens without compromising the robustness of our
assessments.
We are finalizing without changes our proposal regarding APM Entity
data submission for the quality performance category in all MIPS APMs
and the advancing care information performance category in the Shared
Savings Program. APM Entity groups will not submit data for the
improvement activities performance category unless the improvement
activities performance category score we assign at the MIPS APM level
is less than the maximum score. In this instance, the APM Entities in
the MIPS APM would use one of the MIPS data submission mechanisms if
they opt to report additional improvement activities in order to
increase their score for the improvement activities performance
category. MIPS eligible clinicians in APM Entity groups participating
in MIPS APMs other than the Shared Savings Program may report data for
the advancing care information performance category to MIPS using a MIPS data
submission mechanism for either group reporting at the TIN level or
individual reporting. Table 14 describes data submission methods for
the MIPS performance categories under the APM scoring standard.
Table 14--APM Entity Submission Method for Each MIPS Performance
Category
------------------------------------------------------------------------
APM Entity eligible clinician
MIPS performance category submission method
------------------------------------------------------------------------
Quality........................... The APM Entity group submits quality
measure data to CMS as required
under the APM.
Cost.............................. No data submitted by APM Entity
group to MIPS.
Improvement Activities............ No data submitted by APM Entity
group to MIPS unless the assigned
score at the MIPS APM level does
not represent the maximum
improvement activities score, in
which case the APM Entity may
report additional improvement
activities using a MIPS data
submission mechanism.
Advancing Care Information........ Shared Savings Program ACO
participant TINs submit data using
a MIPS data submission mechanism.
Next Generation ACO Model and other
MIPS APM eligible clinicians submit
data at either the individual level
or at the TIN level using a MIPS
data submission mechanism.
------------------------------------------------------------------------
[[Page 77270]]
(16) MIPS APM Performance Feedback
For the first MIPS performance feedback specified under section
1848(q)(12) of the Act to be published by July 1, 2017, we proposed
that all MIPS eligible clinicians participating in MIPS APMs would
receive the same historical information prepared for all MIPS eligible
clinicians except the report would indicate that the historical
information provided to such MIPS eligible clinicians is for
informational purposes only. MIPS eligible clinicians participating in
APMs have been evaluated for performance only under the APM. Thus,
historical information may not be representative of the scores that
these MIPS eligible clinicians would receive under MIPS.
For MIPS eligible clinicians participating in MIPS APMs, we
proposed that the MIPS performance feedback would consist only of the
scores applicable to the APM Entity group for the specific MIPS
performance period. For example, the MIPS eligible clinicians
participating in the Shared Savings Program and Next Generation ACO
Model would receive performance feedback for the quality, improvement
activities, and advancing care information performance categories for
the 2017 performance period. Because these MIPS eligible clinicians
would not be assessed for the cost performance category, information on
MIPS performance scores for the cost performance category would not be
applicable to these MIPS eligible clinicians.
We also proposed that, for the Shared Savings Program, the
performance feedback would be available to the eligible clinicians
participating in the Shared Savings Program at the group billing TIN
level. For the Next Generation ACO Model we proposed that the
performance feedback would be available to all MIPS eligible clinicians
participating in the MIPS APM Entity.
We proposed that in the first MIPS performance period, the MIPS
eligible clinicians participating in MIPS APMs other than the Shared
Savings Program or the Next Generation ACO Model would receive
performance feedback for the improvement activities and advancing care
information performance categories only, as they would not be assessed
under the quality or cost performance categories. The information such
as MIPS measure score comparisons for the quality and cost performance
categories would not be applicable to these MIPS eligible clinicians
because no such comparative data would exist. We proposed the
performance feedback for MIPS eligible clinicians participating in
these other APMs would be available for each MIPS eligible clinician
that submitted MIPS data for these performance categories under their
respective APM Entities. We invited comment on these proposals.
The following is a summary of the comments we received regarding
our proposals to provide the same historical information as those
participating in MIPS, provide feedback on scores for applicable
performance categories to the APM Entity group for the specific MIPS
performance period, and provide feedback for those participating in the
Shared Savings Program at the group TIN level and feedback for those
participating in the Next Generation ACO Model and all other MIPS APMs
at the individual level.
Comment: One commenter recommended that CMS deliver feedback to
clinicians or organizations by no later than October 1 of the reporting
year to allow the organization to make appropriate changes in care
improvement. One commenter stated that eligible clinicians
participating in APMs need timely feedback to provide a clear
understanding of patient attribution and performance measurement, and
several commenters requested that CMS give feedback more frequently
than annually during the first few years of the program.
Response: We appreciate that MIPS eligible clinicians participating
in MIPS APMs would prefer to receive feedback as early and often as
possible in order to succeed in the Quality Payment Program and
continue to improve, and we will continue to explore opportunities to
provide more frequent feedback in the future.
We are revising the proposed policy in order to maintain alignment
with the generally applicable MIPS performance feedback policies. As
noted in section II.E.8.a. of this final rule with comment period, the
September 2016 QRUR will be used to satisfy the requirement under
section 1848(q)(12)(A)(i) of the Act to provide MIPS eligible
clinicians performance feedback on the quality and cost performance
categories beginning July 1, 2017. We are finalizing a policy that all
MIPS eligible clinicians scored under the APM scoring standard will
also receive this performance feedback to the extent applicable, unless
they did not have data included in the September 2016 QRUR. MIPS
eligible clinicians without data included in the September 2016 QRUR
will not receive performance feedback until CMS is able to use data
acquired through the Quality Payment Program for performance feedback.
6. MIPS Final Score Methodology
By incentivizing quality and value for all MIPS eligible
clinicians, MIPS creates a new mechanism for calculating MIPS eligible
clinician payments. To implement this vision, we proposed a scoring
methodology that allows for accountability and alignment across the
performance categories and minimizes burden on MIPS eligible
clinicians. Further, we proposed a scoring methodology that is
meaningful, understandable and flexible for all MIPS eligible
clinicians. Our proposed methodology would allow for multiple pathways
to success with flexibility for the variety of practice types and
reporting options. First, we proposed multiple ways that MIPS eligible
clinicians may submit data to MIPS for the quality performance
category. Second, we provided greater flexibility in the reporting
requirements and scoring for MIPS. Third, we proposed that bonus points
would be available for reporting high priority measures and electronic
reporting of quality data. Recognizing that MIPS is a new program, we
also outlined proposals which we believed are operationally feasible
for us to implement in the transition year, while maintaining our
longer-term vision.
Section 1848(q) of the Act requires the Secretary to: (1) Develop a
methodology for assessing the total performance of each MIPS eligible
clinician according to performance standards for a performance period
for a year; (2) using the methodology, provide a final score for each
MIPS eligible clinician for each performance period; and (3) use the
final score of the MIPS eligible clinician for a performance period to
determine and apply a MIPS payment adjustment factor (and, as
applicable, an additional MIPS payment adjustment factor) to the MIPS
eligible clinician for the MIPS payment year. In section II.E.5 of the
proposed rule (81 FR 28181), we proposed the measures and activities
for each of the four MIPS performance categories: Quality, cost,
improvement activities, and advancing care information. This section of
the final rule with comment period discusses our proposals of the
performance standards for the measures and activities for each of the
four performance categories under section 1848(q)(3) of the Act, the
methodology for determining a score for each of the four performance
categories (referred to as a ``performance category score''), and the
methodology for determining a final score under section 1848(q)(5) of
the Act based on the scores determined for each of the four performance
categories. We proposed to
[[Page 77271]]
define the performance category score in section II.E.6 of the proposed
rule (81 FR 28247) as the assessment of each MIPS eligible clinician's
performance on the applicable measures and activities for a performance
category for a performance period based on the performance standards
for those measures and activities. In section II.E.7 of the proposed
rule (81 FR 28271), we included proposals for determining the MIPS
adjustments factors based on the final score.
As noted in section II.E.2 of the proposed rule (81 FR 28176), we
proposed to use multiple identifiers to allow MIPS eligible clinicians
to be measured as individuals, or collectively as part of a group or an
APM Entity group (an APM Entity participating in a MIPS APM). Further,
in section II.E.5.a.(2) of the proposed rule (81 FR 28182), we proposed
that data for all four MIPS performance categories would be submitted
using the same identifier (either individual or group) and that the
final score would be calculated using the same identifier. Section
II.E.5.h of the final rule with comment period describes our policies
in the event that an APM Entity scored through the APM scoring standard
fails reporting. The scoring proposals in section II.E.6 of the
proposed rule (81 FR 28247), would be applied in the same manner for
either individual submissions, proposed as TIN/NPI, or for the group
submissions using the TIN identifier. Unless otherwise noted, for
purposes of this section on scoring, the term ``MIPS eligible
clinician'' will refer to clinicians that are reporting and are scored
at either the individual or group level, but will not refer to
clinicians participating in an APM Entity scored through the APM
scoring standard.
Comments related to APM Entity group reporting and scoring for MIPS
eligible clinicians participating in MIPS APMs are summarized in
section II.E.5.h of this final rule with comment period. All eligible
clinicians that participate in APMs are considered MIPS eligible
clinicians unless and until they are determined to be either QPs or
Partial QPs who elect not to report under MIPS, and are excluded from
MIPS, or unless another MIPS exclusion applies. We finalize at Sec.
414.1380(d) that MIPS eligible clinicians in APM Entities that are
subject to the APM scoring standard are scored using the methodology
under Sec. 414.1370, as described in section II.E.5.h of this final rule with
comment period.
MIPS eligible clinicians who participate in APMs that are not MIPS
APMs as defined in section II.E.5.h of the proposed rule (81 FR 28234)
would report to MIPS as an individual MIPS eligible clinician or group.
Unless otherwise specified, the proposals in section II.E.6.a of the
proposed rule (81 FR 28247) that relate to reporting and scoring of
measures and activities do not affect the APM scoring standard.
Our rationale for our scoring methodology is grounded in the
understanding that the MIPS scoring system has many components and
numerous moving parts. Thus, we believe it is necessary to set up key
parameters around scoring, including requiring MIPS eligible clinicians
to report at the individual or group level across all performance
categories and generally, to submit information for a performance
category using a single submission mechanism. Too many different
permutations would create additional complexity and could cause
confusion among MIPS eligible clinicians as to what is or is not
allowed.
We have heard from stakeholders about our MIPS proposals. There are
some major concerns, particularly for the transition year (MIPS payment
year 2019), about program complexity, not having sufficient time to
understand the program before being measured, and potentially receiving
negative adjustments. Based on stakeholder feedback discussed in this
section, we are adjusting multiple parts of our proposed scoring
approach to enhance the likelihood MIPS eligible clinicians who may
have not had time to prepare can succeed under the program. We believe
that these adjustments will enable more robust and thorough engagement
with the program over time. Specifically, we have modified performance
standards for the performance categories used to evaluate the measures
and activities as well as the methodology to create a final score, and
we lowered the performance threshold. Thus, we have created a
transition year scoring methodology that does the following:
Provides a negative 4 percent payment adjustment to MIPS
eligible clinicians who do not submit any data to MIPS;
Ensures that MIPS eligible clinicians who submit data and
meet program requirements under any of the three performance categories
for which data must be submitted (quality, improvement activities, and
advancing care information) for at least a 90-day period,\20\ and have
low overall performance in the performance category or categories on
which they choose to report may receive a final score at or slightly
above the performance threshold and thus a neutral to small positive
adjustment, and
---------------------------------------------------------------------------
\20\ We note there are special circumstances in which MIPS
eligible clinicians may submit data for a period of less than 90
days and avoid a negative MIPS payment adjustment. For example, in
some circumstances, MIPS eligible clinicians may meet data
completeness criteria for certain quality measures in less than the
90-day period. Also, in instances where MIPS eligible clinicians do
not meet the data completeness criteria for quality measures
submitted, we will provide partial credit for submission of these
measures.
---------------------------------------------------------------------------
Ensures that MIPS eligible clinicians who submit data and
meet program requirements under each of the three performance
categories for which data must be submitted (quality, improvement
activities, and advancing care information) for at least a 90-day
period, and have average to high overall performance across the three
categories may receive a final score above the performance threshold
and thus a higher positive adjustment, and, for those MIPS eligible
clinicians who receive a final score at or above the additional
performance threshold, an additional positive adjustment.
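The transition year design described above can be summarized directionally in the following sketch. The threshold values and return labels are illustrative placeholders (the performance threshold and additional performance threshold are established in section II.E.7.c of this final rule with comment period), and the function does not compute actual payment adjustment factors:

```python
def transition_year_outcome(final_score, performance_threshold=3.0,
                            additional_threshold=70.0):
    """Directional summary of the transition year scoring design
    (illustrative sketch, not the actual adjustment calculation).

    No data yields the full negative adjustment; meeting the lowered
    performance threshold yields a neutral to positive adjustment; and
    a final score at or above the additional performance threshold
    yields an additional positive adjustment. The default threshold
    values here are placeholders, not taken from this section.
    """
    if final_score == 0:
        return "negative 4 percent payment adjustment (no data submitted)"
    if final_score < performance_threshold:
        return "negative adjustment smaller than 4 percent"
    if final_score < additional_threshold:
        return "neutral to positive adjustment"
    return "positive adjustment plus additional positive adjustment"
```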
a. Converting Measures and Activities Into Performance Category Scores
(1) Policies That Apply Across Multiple Performance Categories
The detailed policies for scoring the four performance categories
are described in section II.E.6.a of the proposed rule (81 FR 28248).
However, as the four performance categories collectively create a
single MIPS final score, there are some cross-cutting policies that we
proposed to apply to multiple performance categories.
(a) Performance Standards
Section 1848(q)(3)(A) of the Act requires the Secretary to
establish performance standards for the measures and activities in the
four MIPS performance categories. Section 1848(q)(3)(B) of the Act
requires the Secretary, in establishing performance standards for
measures and activities for the four MIPS performance categories, to
consider historical performance standards, improvement, and the
opportunity for continued improvement. We proposed to define the term,
performance standards, at Sec. 414.1305 as the level of performance
and methodology that the MIPS eligible clinician is assessed on for a
MIPS performance period at the measures and activities level for all
MIPS performance categories. We defined the term, MIPS payment year, at
Sec. 414.1305 as the calendar year in which MIPS payment adjustments
are applied. Performance
[[Page 77272]]
standards for each performance category were proposed in more detail in
section II.E.6 of the proposed rule (81 FR 28247). MIPS eligible
clinicians would know the actual performance standards in advance of
the performance period, when possible. Further, each performance
category is unified under the principle that MIPS eligible clinicians
would know, in advance of the performance period, the methodology for
determining the performance standards and the methodology that would be
used to score their performance. Table 16 of the proposed rule (81 FR
28249), summarizes the proposed performance standards.
The following is a summary of the comments we received regarding
our performance standard proposals.
Comment: Multiple commenters were concerned that the performance
standards may not be available in advance of the performance period, or
that the performance standards methodologies would only be available
``when possible''. Commenters requested that CMS publish the
performance standards with as much advance notice as possible so that
MIPS eligible clinicians will be able to plan and know the standards
against which they will be measured.
Response: The performance standard methodology will be known in
advance so that MIPS eligible clinicians can understand how they will
be measured. For improvement activities and advancing care information,
the performance standards are known prior to the performance period and
are delineated in this final rule with comment period. For the quality
performance category, benchmarks are known prior to the performance
period when benchmarks are based on the baseline period. For new
measures in the quality performance category, for quality measures
where there is no historical baseline data to build the benchmarks, and
for measures in the cost performance category, the benchmarks will be
based on performance period data and therefore, will not be known prior
to the performance period.
When performance standards for certain quality measures are not
known prior to the performance period, we are implementing protections
for MIPS eligible clinicians who ultimately perform poorly on these
measures. For example, as discussed in section II.E.6.a.(2)(b) of this
final rule with comment period, we have added quality performance
floors for the transition year to protect MIPS eligible clinicians
against unexpectedly low performance scores. For cost measures, the
benchmarks will be based on performance period data and cannot be
published in advance. However, we do plan to provide feedback on
performance so that MIPS eligible clinicians can understand their
performance and improve in subsequent years. We will provide feedback
before the performance period based on prior period data, illustrating
how MIPS eligible clinicians might perform on these measures and we
will provide feedback after the performance period based on performance
period data, illustrating how MIPS eligible clinicians actually
performed on these measures.
In addition, as discussed in section II.E.5.e.(2) of this final
rule with comment period, we are also lowering the weight of the cost
performance category to 0 percent of the final score for the transition
year.
Finally, as discussed in section II.E.7.c of this final rule with
comment period, we are lowering the performance threshold for this
transition year.
Comment: One commenter stated that the government should not decide
on definitions of quality and financial rewards or penalties for
meeting such standards.
Response: Section 1848(q)(3)(A) of the Act requires the Secretary
to establish performance standards for the measures and activities in
the four MIPS performance categories, including quality, and section
1848(q)(1)(A) of the Act generally requires us to develop a scoring
methodology for assessing the total performance of each MIPS eligible
clinician according to those standards and to use such scores to
determine and apply MIPS payment adjustment factors and, as applicable,
additional MIPS adjustments. We believe our proposals are consistent
with these statutory requirements.
After consideration of the comments, we are finalizing the term,
performance standards, at Sec. 414.1305 as the level of performance
and methodology that the MIPS eligible clinician is assessed on for a
MIPS performance period at the measures and activities level for all
MIPS performance categories. We are finalizing at Sec. 414.1380(a)
that MIPS eligible clinicians are scored under MIPS based on their
performance on measures and activities in four performance categories.
MIPS eligible clinicians are scored against performance standards for
each performance category and receive a final score, composed of their
scores on individual measures and activities, and calculated according
to the final score methodology. We are also finalizing at Sec.
414.1380(a)(1) that measures and activities in the four performance
categories are scored against performance standards.
MIPS eligible clinicians will know, in advance of the performance
period, the methodology for determining the performance standards and
the methodology that will be used to score their performance. MIPS
eligible clinicians will know the numerical performance standards in
the quality performance category in advance of the performance period,
when possible. A summary of the performance standards per performance
category is provided in Table 15. As discussed in section II.E.6.a.(2)
of this final rule with comment period, we are finalizing at Sec.
414.1380(a)(1)(i) that for the quality performance category, measures
are scored between zero and 10 points. Performance is measured against
benchmarks. Bonus points are available for both submitting specific
types of measures and submitting measures using end-to-end electronic
reporting. As discussed in section II.E.6.a.(3) of this final rule with
comment period, we are finalizing at Sec. 414.1380(a)(1)(ii) that for
the cost performance category, measures are scored between one and
10 points. Performance is also measured against benchmarks. As
discussed in section II.E.6.a.(4), we are also finalizing at Sec.
414.1380(a)(1)(iii) that for the improvement activities performance
category each improvement activity is worth a certain number of points.
The points for each reported activity are summed and scored against a
total potential performance category score of 40 points. As discussed
in section II.E.6.a.(5) of this final rule with
comment period, we are finalizing at Sec. 414.1380(a)(1)(iv), that for
the advancing care information performance category, the performance
category score is the sum of a base score, performance score, and bonus
score.
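The per-category standards summarized above can be restated in a minimal sketch. This is an illustration only, not CMS's scoring implementation; all function names are hypothetical, and the 100-percent cap on the advancing care information score is an assumption of this sketch.

```python
# Illustrative sketch only -- hypothetical function names, not CMS code;
# it simply restates the per-category standards summarized above.

def quality_measure_points(performance_rate, benchmark_deciles):
    """Quality: each measure is scored between zero and 10 points by finding
    where performance falls among benchmark deciles (an ascending list of
    (decile_floor, points) pairs); bonus points are handled separately."""
    points = 0
    for decile_floor, decile_points in benchmark_deciles:
        if performance_rate >= decile_floor:
            points = decile_points
    return points

def improvement_activities_score(activity_points):
    """Improvement activities: points for reported activities are summed and
    assessed against a total potential category score of 40 points."""
    return min(sum(activity_points), 40)

def advancing_care_information_score(base_score, performance_score, bonus_score):
    """Advancing care information: the category score is the sum of a base
    score, a performance score, and a bonus score (the 100-percent cap here
    is an assumption of this sketch)."""
    return min(base_score + performance_score + bonus_score, 100)
```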
As discussed in section II.E.6.a.(2) of this final rule with
comment period, we are making changes to the quality performance
category in response to comments received and are providing a minimum
floor for all submitted measures to provide additional safeguards in
the transition year. As discussed in section II.E.6.a.(4) of this final
rule with comment period, we are making a minor modification to the
improvement activities standard to provide additional clarification on
improvement activities scoring and to align with comments received.
Further, as discussed in section II.E.5.f of this final rule with
comment period, we are making additional changes to the advancing care
information performance category to align with comments
[[Page 77273]]
received. We are also finalizing the definition of performance category
score at Sec. 414.1305 as the assessment of each MIPS eligible
clinician's performance on the applicable measures and activities for a
performance category for a performance period, based on the performance
standards for those measures and activities.
Additionally, we are finalizing the definition of the term, MIPS
payment year, with a modification for further consistency with the
statute. Specifically, MIPS payment year is defined at Sec. 414.1305
as a calendar year in which the MIPS payment adjustment factor, and,
if applicable, the additional MIPS payment adjustment factor, are
applied to Medicare Part B payments.
   Table 15--Performance Category Performance Standards for the 2017
                           Performance Period
------------------------------------------------------------------------
 Performance category     Proposed performance      Final performance
                          standard                  standard
------------------------------------------------------------------------
Quality................   Measure benchmarks to     Measure benchmarks to
                          assign points, plus       assign points, plus
                          bonus points.             bonus points with a
                                                    minimum floor for
                                                    all measures.
Cost...................   Measure benchmarks to     Measure benchmarks to
                          assign points.            assign points.
Improvement Activities.   Based on participation    Based on participation
                          in activities that        in activities listed
                          align with the            in Table H of the
                          patient-centered          Appendix of this
                          medical home.             final rule with
                                                    comment period.
                          Number of points from     Based on participation
                          reported activities       as a patient-centered
                          compared against a        medical home or
                          highest potential         comparable specialty
                          score of 60 points.       practice.
                          Based on participation    Based on participation
                          in the CMS study on       as an APM.
                          improvement activities
                          and measurement.
                                                    Number of points from
                                                    reported activities
                                                    or credit from
                                                    participation in an
                                                    APM compared against
                                                    a highest potential
                                                    score of 40 points.
Advancing Care            Based on participation    Based on participation
 Information...........   (base score) and          (base score) and
                          performance               performance
                          (performance score).      (performance score).
                          Base score: Achieved      Base score: Achieved
                          by meeting the Protect    by meeting the Protect
                          Patient Health            Patient Health
                          Information objective     Information objective
                          and reporting the         and reporting the
                          numerator (of at least    numerator (of at least
                          one) and denominator      one) and denominator
                          or yes/no statement as    or yes/no statement as
                          applicable (only a yes    applicable (only a yes
                          statement would           statement would
                          qualify for credit        qualify for credit
                          under the base score)     under the base score)
                          for each required         for each required
                          measure.                  measure.
                          Performance score:        Performance score:
                          Decile scale for          Between zero and 10
                          additional achievement    or 20 percent per
                          on measures above the     measure (as designated
                          base score                by CMS) based upon
                          requirements, plus 1      measure reporting
                          bonus point.              rate, plus up to 15
                                                    percent bonus score.
------------------------------------------------------------------------
(b) Unified Scoring System
Section 1848(q)(5)(A) of the Act requires the Secretary to develop
a methodology for assessing the total performance of each MIPS eligible
clinician according to performance standards for applicable measures
and activities in each performance category applicable to the MIPS
eligible clinician for a performance period. While MIPS has four
different performance categories, we proposed a unified scoring system
that enables MIPS eligible clinicians, beneficiaries, and stakeholders
to understand what is required for a strong performance in MIPS while
being consistent with statutory requirements. We sought to keep the
scoring as simple as possible, while providing flexibility for the
variety of practice types and reporting options. We proposed to
incorporate the following characteristics into the scoring
methodologies for each of the four MIPS performance categories:
For the quality and cost performance categories, all
measures would be converted to a 10-point scoring system which provides
a framework to universally compare different types of measures across
different types of MIPS eligible clinicians. We noted that a similar
point framework has been successfully implemented in several other CMS
quality programs including the Hospital VBP Program.
The measure and activity performance standards would be
published, where feasible, before the performance period begins, so
that MIPS eligible clinicians can track their performance during the
performance period. This transparency would make the information more
actionable to MIPS eligible clinicians.
Unlike the PQRS or the EHR Incentive Program, we proposed
that we generally would not include ``all-or-nothing'' reporting
requirements for MIPS. The methodology would score measures and
activities that meet certain standards defined in section II.E.5 of the
proposed rule (81 FR 28181 through 28247) and this section of the final
rule with comment period. However, section 1848(q)(5)(B)(i) of the Act
provides that under the MIPS scoring methodology, MIPS eligible
clinicians who fail to report on an applicable measure or activity that
is required to be reported shall be treated as receiving the lowest
possible score for the measure or activity. Therefore, MIPS eligible
clinicians that fail to report specific measures or activities would
receive zero points for each required measure or activity that they do
not submit to MIPS.
The scoring system would ensure sufficient reliability and
validity by only scoring the measures that meet certain standards (such
as the required case minimum). The standards are described later in
this section.
The scoring proposals provide incentives for MIPS eligible
clinicians to invest and focus on certain measures and activities that
meet high priority policy goals such as improving beneficiary health,
improving care coordination through health information exchange, or
encouraging APM Entity participation.
Performance at any level would receive points towards the
performance category scores.
We noted that we anticipated scoring in future years would continue
to align and simplify. We requested comment on the characteristics of
the proposed unified scoring system.
We also proposed at Sec. 414.1325 that MIPS eligible clinicians
and groups may elect to submit information via multiple mechanisms;
however, they must use the same identifier for all performance
categories and they may only use one
[[Page 77274]]
submission mechanism per performance category. For example, a MIPS
eligible clinician could use one submission mechanism for sending
quality measures and another for sending improvement activities data,
but a MIPS eligible clinician could not use two submission mechanisms
for a single performance category, such as submitting three quality
measures via claims and three quality measures via registry. We did
intend to allow some flexibility: in rare situations where a MIPS
eligible clinician submits data for a performance category via multiple
submission mechanisms (for example, submits data for the quality
performance category through both a registry and a QCDR), we would
score all the options (such as scoring the quality performance category
with data from the registry, and also scoring it with data from the
QCDR) and use the highest performance category score in the MIPS
eligible clinician's final score. We would not, however, combine the
submission mechanisms to calculate an aggregated performance category
score.
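The highest-score rule described above can be sketched in a few lines. This is an illustrative sketch with a hypothetical helper name, not CMS code.

```python
# Illustrative sketch, not CMS code: each submission mechanism's data is
# scored separately, and the highest resulting performance category score
# is used; submissions are never combined into an aggregated score.

def pick_category_score(scores_by_mechanism):
    """scores_by_mechanism maps a mechanism name (e.g., 'registry', 'QCDR')
    to the category score computed from that mechanism's data alone."""
    return max(scores_by_mechanism.values())
```

For example, a clinician whose registry submission scores 62.0 and whose QCDR submission scores 71.5 would receive the 71.5 category score, not a blend of the two.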
In carrying out MIPS, section 1848(q)(1)(E) of the Act requires the
Secretary to encourage the use of QCDRs under section 1848(m)(3)(E) of
the Act. In addition, section 1848(q)(5)(B)(ii) of the Act provides
that under the methodology for assessing the total performance of each
MIPS eligible clinician, the Secretary shall encourage MIPS eligible
clinicians to report on applicable measures under the quality
performance category through the use of CEHRT and QCDRs. To encourage
the use of QCDRs, we proposed opportunities for QCDRs to report new and
innovative quality measures. In addition, several improvement
activities emphasize QCDR participation. Finally, we proposed under
section II.E.5.a of the proposed rule (81 FR 28181) for QCDRs to be
able to submit data on all MIPS performance categories. We believe
these flexible options would allow MIPS eligible clinicians to meet the
submission criteria for MIPS in a low burden manner, which in turn may
positively affect their final score. We further believe these
flexibilities encourage use of end-to-end electronic data extraction
and submission where feasible today, and foster further development of
methods that avoid manual data collection where automation is a valid,
reliable option and that promote the goal of capturing data once and
re-using it for multiple appropriate purposes.
In addition, section 1848(q)(5)(D) of the Act lays out the
requirements for incorporating performance improvement into the MIPS
scoring methodology beginning with the second MIPS performance period,
if data sufficient to measure improvement is available. Section
1848(q)(5)(D)(ii) of the Act also provides that achievement may be
weighted higher than improvement. Stated generally, we consider
achievement to mean how a MIPS eligible clinician performs relative to
performance standards, and improvement to mean how a MIPS eligible
clinician performs compared to the MIPS eligible clinician's own
previous performance on measures and activities in a performance
category. Improvement would not be scored for the transition year of
MIPS, but we solicited comment on how best to incorporate improvement
scoring for all performance categories.
The following is a summary of the comments we received regarding
our proposal for a unified scoring system.
Comment: Some commenters expressed support for the unified scoring
system and agreed with having a unified and simplified scoring system,
but some believed the proposed scoring methodology for MIPS is
confusing and requires more alignment across performance categories.
Commenters noted that physicians will not be able to understand how CMS
calculated their score and would not know if appeals to CMS would be
needed in order to correct information or plan for the future. Several
commenters requested one single score, or fewer than four separate
performance category scores, rather than aggregating individual scores
for the four performance categories. Others noted the need for feedback
prior to scoring. Others recommended simplifying the scoring system by
aligning it across performance categories, and one commenter expressed
concern about the total number of measures and activities across the
four performance categories adding complexity to the scoring.
Response: Despite our efforts to create a transparent and
standardized scoring system, we understand that some stakeholders may
be concerned about the scoring complexity and may want more alignment
across categories. We also understand stakeholders' requests for
feedback prior to scoring. Several of our core objectives for MIPS are
to promote program understanding and participation through customized
communication, education, outreach and support, and to improve data and
information sharing to provide accurate, timely, and actionable
feedback to MIPS eligible clinicians. Prior to receiving a payment
adjustment, MIPS eligible clinicians will receive timely confidential
feedback on their program performance as discussed in section II.E.8.a
of this final rule with comment period.
We have simplified the overall scoring approach for MIPS eligible
clinicians in the transition year. Under this scoring approach, MIPS
eligible clinicians who report measures/activities with minimal levels
of performance will not be subject to negative payment adjustments if
their final score is at or above the performance threshold. We believe
having scores for individual performance categories aligns with the
statute; however, we have provided numerous examples within section
II.E.6.a.(2)(g) of this final rule with comment period to provide
transparency as to how we will calculate MIPS eligible clinicians'
scores and help MIPS eligible clinicians to understand how to succeed
in the program. Further, we will continue to provide additional
materials to create a transparent and standardized scoring system.
Comment: Commenters expressed concern that the unified scoring
system may not allow consumers and payers to make meaningful
comparisons across MIPS eligible clinicians. The commenters' reasons
for concern include the varied reporting options and different score
denominators.
Response: We have taken a patient-centered approach toward
implementing our unified scoring system, which does allow for special
circumstances for certain types of practices such as non-patient facing
professionals, as well as small practices, rural practices and those in
HPSA geographic areas. We believe our approach balances the interests
of patients and payers while also providing flexibility for the variety
of MIPS eligible clinician practices and encourages more collaboration
across practice types.
Comment: Multiple commenters requested clarification on evaluating
group performance within each of the four performance categories;
specifically, whether it is CMS's intent to evaluate each individual
within a group and somehow aggregate that performance into a composite
group score or to evaluate the group as a single entity.
Response: Evaluation of group practices and individual practices is
discussed under each performance category in sections II.E.5.b.,
II.E.5.e., II.E.5.f., and II.E.5.g. of this final rule with comment
period.
Comment: One commenter requested that CMS explain the benefit of
[[Page 77275]]
reporting via QCDR and why this method is emphasized in the proposed
rule.
Response: QCDRs have more flexibility to collect data from
different data sources and to rapidly develop innovative measures that
can be incorporated into MIPS. Therefore, we believe that QCDRs provide
an opportunity for innovative measurement that is both relevant to MIPS
eligible clinicians and beneficial to Medicare beneficiaries. In
addition, section 1848(q)(1)(E) of the Act requires us to encourage the
use of QCDRs.
Comment: Some commenters supported the removal of ``all-or-
nothing'' scoring. One commenter encouraged CMS to create more partial-
scoring opportunities.
Response: We appreciate the comments on the removal of ``all-or-
nothing'' scoring. We will take them into consideration when evaluating
additional recommendations for partial credit in future rulemaking.
Comment: One commenter expressed concern that CMS cannot measure
physician ``performance'' accurately. The commenter cited multiple
sources that supported this statement.
Response: We recognize the challenges in measuring clinician
performance and continue to work with stakeholders to address concerns.
After consideration of these comments, we are finalizing all of our
policies related to unified scoring as proposed, except we are
modifying our proposed policy on scoring quality measures.
We list below all policies we are finalizing related to our
proposed unified scoring system.
For the quality and cost performance categories, all
measures will be converted to a 10-point scoring system which provides
a framework to universally compare different types of measures across
different types of MIPS eligible clinicians.
The measure and activity performance standards will be
published, where feasible, before the performance period begins, so
that MIPS eligible clinicians can track their performance during the
performance period.
MIPS eligible clinicians who fail to report specific
measures or activities would receive zero points for each required
measure or activity that they do not submit to MIPS.
The scoring policies provide incentives for MIPS eligible
clinicians to invest and focus on certain measures and activities that
meet high priority policy goals such as improving beneficiary health,
improving care coordination through health information exchange, or
encouraging APM Entity participation.
Performance at any level would receive points towards the
performance category scores.
We also are finalizing at Sec. 414.1325 that MIPS eligible
clinicians and groups may elect to submit information via multiple
mechanisms; however, they must use the same identifier for all
performance categories and they may only use one submission mechanism
per performance category. For example, a MIPS eligible clinician could
use one submission mechanism for sending quality measures and another
for sending improvement activities data, but a MIPS eligible clinician
could not use two submission mechanisms for a single performance
category, such as submitting three quality measures via claims and
three quality measures via registry. We do intend to allow some
flexibility: in rare situations where a MIPS eligible clinician submits
data for a performance category via multiple submission mechanisms (for
example, submits data for the quality performance category through both
a registry and a QCDR), we will score all the options (such as scoring
the quality performance category with data from the registry, and also
scoring it with data from the QCDR) and use the highest performance
category score in the MIPS eligible clinician's final score. We will
not, however, combine the submission mechanisms to calculate an
aggregated performance category score. The one exception to this policy
is CAHPS for MIPS, which is submitted using a CMS-approved survey
vendor; CAHPS for MIPS can be scored in conjunction with other
submission mechanisms.
With regard to the above policy, we note that some submission
mechanisms allow for multiple measure types; for example, a QCDR could
submit data on behalf of an eligible clinician for a mixture of MIPS
eCQMs and non-MIPS measures. However, we recognize that the scoring of
only one submission mechanism in the transition year may influence
which measures a MIPS eligible clinician selects to submit for the
performance period. For example, a MIPS eligible clinician or group may
only be able to report a limited number of measures relevant to their
practice through a given submission mechanism, and therefore they may
elect to choose a different submission mechanism through which a more
robust set of measures relevant to their practice is available. We are
seeking comment on whether we should modify this policy to allow
combined scoring on all measures submitted across multiple submission
mechanisms within a performance category. Specifically, we are seeking
comment on the following questions:
Would offering a combined performance category score
across submission mechanisms encourage electronic reporting and the
development of more measures that effectively use highly reliable,
accurate clinical data routinely captured by CEHRT in the normal course
of delivering safe and effective care? If so, are there particular
approaches to the performance category score combination that would
provide more encouragement than others?
What approach should be used to combine the scores for
quality measures from multiple submission mechanisms into a single
aggregate score for the quality performance category? For example,
should CMS offer a weighted average score on quality measures submitted
through two or more different mechanisms? Or take the highest scores
for any submitted measure regardless of how the measure is submitted?
What steps should CMS and ONC consider taking to increase
clinician and consumer confidence in the reliability of the technology
used to extract, aggregate, and submit electronic quality measurement
data to CMS?
What enhancements to submission mechanisms or scoring
methodologies for future years might reinforce incentives to encourage
electronic reporting and improve reliability and comparability of CQMs
reported by different electronic mechanisms?
We are modifying our proposed policy on scoring quality measures.
Specifically, as discussed in section II.E.6.a.(2)(b) of this final
rule with comment period, for the transition year, we are providing a
global minimum floor of 3 points for all quality measures submitted. As
discussed in section II.E.6.a.(2)(c) of the final rule with comment
period, we are also modifying our proposed policy in which we would
only score the measures that meet certain standards (such as required
case minimum). For the transition year, we are automatically providing
3 points for quality measures that are submitted, regardless of whether
they lack a benchmark or do not meet the case minimum or data
completeness requirements. Finally, as discussed in section II.E.6.h of
this final rule with comment period, we intend to propose options for
scoring based on improvement through future rulemaking.
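The transition-year quality floor described above can be sketched as follows. This is an illustration with a hypothetical function name, not CMS's implementation.

```python
# Illustrative sketch of the transition-year quality scoring floor: every
# submitted quality measure receives at least 3 points, and measures that
# lack a benchmark or fail the case minimum or data completeness
# requirements automatically receive 3 points.

def transition_year_measure_points(benchmark_points, scorable):
    """benchmark_points: the 0-10 points earned against the benchmark when
    the measure can be scored; scorable: False when the measure lacks a
    benchmark or misses the case minimum / data completeness standards."""
    if not scorable:
        return 3                      # automatic 3 points when unscorable
    return max(benchmark_points, 3)   # global minimum floor of 3 points
```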
[[Page 77276]]
Various policies related to scoring the four performance categories
are finalized at Sec. 414.1380(b) and described in more detail in
sections II.E.6.a.(2), II.E.6.a.(3), II.E.6.a.(4), and II.E.5.g.(6) of
this final rule with comment period.
(c) Baseline Period
In other Medicare quality programs, such as the Hospital VBP
Program, we have adopted a baseline period that occurs prior to the
performance period for a program year to measure improvement and to
establish performance standards. We view the MIPS Program as
necessitating a similar baseline period for the quality performance
category. We intend to establish a baseline period for each performance
period for a MIPS payment year to measure improvement for the quality
performance category and to enable us to calculate performance
standards that we can establish and announce prior to the performance
period. As with the Hospital VBP Program, we intend to adopt one
baseline period for each MIPS payment year that is as close as possible
in duration to the performance period specified for a MIPS payment
year. In addition, evaluating performance compared to a baseline period
may enable other payers to incorporate MIPS benchmarks into their
programs. For each MIPS payment year, we proposed at section
II.E.6.a.(1)(c) of the proposed rule (81 FR 28250) that the baseline
period would be the 12-month calendar year that is 2 years prior to the
performance period for the MIPS payment year. Therefore, for the first
MIPS payment year (CY 2019 payment adjustments), for the quality
performance category, we proposed that the baseline period would be CY
2015 which is 2 years prior to the proposed CY 2017 performance period.
As discussed in section II.E.6.a.(2)(a) of the proposed rule (81 FR
28251), we proposed to use performance in the baseline period to set
benchmarks for the quality performance category, with the exception of
new measures for which we would set the benchmarks using performance in
the performance period and an exception for CMS Web Interface
reporters, which will use the benchmarks associated with the Shared
Savings Program. For the cost performance category, we proposed to set
benchmarks using performance in the performance period and not the
baseline period, as discussed in section II.E.6.a.(3) of the proposed
rule (81 FR 28259). For the cost performance category, we also made an
alternative proposal to set the benchmarks using performance in the
baseline period. We proposed to define the term ``measure benchmark''
for the quality and cost performance categories (81 FR 28250) as the
level of performance that the MIPS eligible clinician will be assessed
on for a performance period at the measures and activities level.
The following is a summary of the comments we received regarding
our proposal to define the baseline period.
Comment: One commenter expressed concern that baseline scoring may
be misaligned when using benchmarks from 1 year for the cost
performance category and a different year for measures in the quality
performance category. Multiple commenters believe all categories should
use the same year to determine benchmarks. Some commenters requested
that CMS measure MIPS eligible clinicians as close as possible to the
performance period, ideally, less than 2 years from the performance
period. Others noted concern about the ability of a clinician to
correct actions with 2-year old data.
Response: Ideally, we would like to have data sources for our
benchmarks aligned across the quality and cost performance categories.
However, we have purposefully chosen different periods for the quality
and cost performance categories. We proposed to use the baseline period
for benchmarks for the quality performance category so that MIPS
eligible clinicians can know quality performance category benchmarks in
advance; however, we believe there are disadvantages to benchmarking
cost measures to a previous year. For example, development of a new
technology or a change in payment policy could result in a significant
change in typical cost from year to year. Therefore, for more accurate
data, it is better to build cost benchmarks from performance period
data than the baseline period. We believe there is more value in the
advance notice for quality performance measures so that MIPS eligible
clinicians can benchmark themselves for quality measures when
historical data is available. In contrast, for the cost performance
category, we believe it is more beneficial to base benchmarks on the
performance period. After considering comments, we are finalizing that
the baseline period will be the 12-month calendar year that is 2 years
prior to the performance period for the MIPS payment year. We believe
that 2 years is the most recent data we can use to develop benchmarks
prior to the performance period.
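The finalized baseline-period rule is simple arithmetic; sketched below with a hypothetical function name for illustration only.

```python
# Illustrative sketch of the finalized baseline-period rule: the baseline
# period is the 12-month calendar year that is 2 years prior to the
# performance period for the MIPS payment year.

def baseline_period_year(performance_period_year):
    return performance_period_year - 2

# For the first MIPS payment year (CY 2019 adjustments), the performance
# period is CY 2017, so the baseline period is CY 2015.
```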
We will use performance in the baseline period to set benchmarks
for the quality performance category, with the exception of new quality
measures, or quality measures that lack historical data, for which we
will set the benchmarks using performance in the performance period,
and an exception for CMS Web Interface reporters, for which we will use
the benchmarks associated with the Shared Savings Program. For the cost
performance category, we will set the benchmarks using performance in
the performance period and not the baseline period. We are defining the
term ``measure benchmark'' for the quality and cost performance
categories at Sec. 414.1305 as the level of performance that the MIPS
eligible clinician is assessed on for a specific performance period at
the measures and activities level.
(2) Scoring the Quality Performance Category
In section II.E.5.b.(3) of the proposed rule, we proposed multiple
ways that MIPS eligible clinicians may submit data for the quality
performance category to MIPS; however, we proposed that the scoring
methodology would be consistent regardless of how the data is
submitted. In summary, we proposed at Sec. 414.1380(b)(1) to assign 1-
10 points to each measure based on how a MIPS eligible clinician's
performance compares to benchmarks. Measures must have the required
case minimum to be scored. We proposed that if a MIPS eligible
clinician fails to submit a measure required under the quality
performance category criteria, then the MIPS eligible clinician would
receive zero points for that measure. We proposed that MIPS eligible
clinicians would not receive zero points if the required measure is
submitted (meeting the data completeness criteria as defined in section
II.E.5.b.(3)(b) of the proposed rule (81 FR 28188)) but is unable to be
scored for any of the reasons listed in section II.E.6.a.(2) of the
proposed rule (81 FR 28250), such as not meeting the required case
minimum or a measure lacks a benchmark. We described in section
II.E.6.a.(2)(d) of the proposed rule (81 FR 28254), examples of how
points would be allocated and how to compute the overall quality
performance category score under these scenarios. Bonus points would be
available for reporting high priority measures, defined as outcome,
appropriate use, efficiency, care coordination, patient safety, and
patient experience measures.
As discussed in section II.E.6.a.(2)(g) of the proposed rule (81 FR
28256), the quality performance category score would be the sum of all
the points assigned for the scored measures required for the quality
performance category plus the bonus points (subject
[[Page 77277]]
to the cap) divided by the sum of total possible points. Examples of
the calculations were provided in the proposed rule (81 FR 28256).
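As a rough illustration only, the category-score arithmetic summarized above (the sum of points assigned to the scored measures, plus bonus points subject to the cap, divided by the sum of total possible points) can be sketched in Python. The function and variable names are hypothetical, and the actual calculation finalized in this rule includes details not shown here:

```python
def quality_category_score(measure_points, bonus_points, bonus_cap, total_possible):
    """Illustrative sketch: (sum of scored-measure points + capped bonus
    points) divided by the sum of total possible points. Hypothetical
    names; not the CMS implementation."""
    capped_bonus = min(bonus_points, bonus_cap)
    return (sum(measure_points) + capped_bonus) / total_possible

# Example: six measures scored out of 10 points each (60 possible points),
# 4 bonus points earned against a hypothetical cap of 6
score = quality_category_score([7, 9, 3, 10, 5, 8],
                               bonus_points=4, bonus_cap=6, total_possible=60)
```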
In section II.E.6.b of the proposed rule (81 FR 28269), we
discussed how we would score MIPS eligible clinicians who do not have
any scored measures in the quality performance category. The details of
the proposed scoring methodology for the quality performance category
are described below.
(a) Quality Measure Benchmarks
For the quality performance category, we proposed at section
II.E.6.a.(2)(a) of the proposed rule (81 FR 28251) that the performance
standard is measure-specific benchmarks. Benchmarks would be determined
based on performance on measures in the baseline period. For quality
performance category measures for which there are baseline period data,
we proposed to calculate an array of measure benchmarks based on
performance during the baseline period, breaking baseline period
measure performance into deciles. Then, a MIPS eligible clinician's
actual measure performance during the performance period would be
evaluated to determine the number of points that should be assigned
based on where the actual measure performance falls within these
baseline period benchmarks. If a measure does not have baseline period
information (for example, new measures), or if the measure
specifications for the baseline period differ substantially from the
performance period (for example, when the measure requirements change
due to updated clinical guidelines), then we proposed to determine the
array of benchmarks based on performance on the measure in the
performance period, breaking the actual performance on the measure into
deciles. In addition, we proposed to create separate benchmarks for
submission mechanisms that do not have comparable measure
specifications. For example, several eCQMs have specifications that are
different than the corresponding measure from registries. We proposed
to develop separate benchmarks for EHR submission mechanisms, claims
submission mechanisms, and QCDRs and qualified registry submission
mechanisms.
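The decile-based scoring described above can be sketched as follows. This is an illustrative simplification (nearest-rank decile boundaries and whole-number points only); the benchmarking methodology finalized in this rule also assigns decimal point values within deciles:

```python
def build_decile_benchmarks(baseline_rates):
    """Break baseline-period performance rates into nine decile
    boundaries (simple nearest-rank sketch; hypothetical helper)."""
    rates = sorted(baseline_rates)
    n = len(rates)
    return [rates[min(n - 1, int(n * d / 10))] for d in range(1, 10)]

def points_for_performance(rate, decile_boundaries):
    """Assign 1-10 points by locating the performance-period rate
    among the baseline deciles (integer points for illustration)."""
    points = 1
    for boundary in decile_boundaries:
        if rate >= boundary:
            points += 1
    return points
```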
For CMS Web Interface reporting, we proposed to use the benchmarks
from the Shared Savings Program as described at https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Quality-Measures-Standards.html, which were finalized in previous
rulemaking.\21\ We proposed to adopt the Shared Savings Program
performance year benchmarks for measures that are reported through the
CMS Web Interface for the MIPS performance period, but proposed to
apply the MIPS method of assigning 1 to 10 points to each measure as an
alternative to calculating separate MIPS benchmarks. Because the Shared
Savings Program does not publicly post or use benchmarks below the 30th
percentile, we proposed to assign all scores below the 30th percentile
a value of 2 points, which is consistent with the mid-cluster approach
we proposed for topped out measures. We believed using the same
benchmarks for MIPS and the Shared Savings Program for the CMS Web
Interface measures would be appropriate because, as is discussed in the
proposed rule (81 FR 28237 through 28243), we proposed to use the MIPS
benchmarks to score MIPS eligible clinicians in the Shared Savings
Program and the Next Generation ACO Model on the quality performance
category and believe it is important to not have conflicting
benchmarks. We would post the MIPS CMS Web Interface benchmarks with
the other MIPS benchmarks.
---------------------------------------------------------------------------
    \21\ Shared Savings Program quality performance benchmarks and
scoring methodology regulations: Medicare Program; Medicare Shared
Savings Program: Accountable Care Organizations; Final Rule, 76 FR
67802 (Nov. 2, 2011). Medicare Program; Revisions to Payment
Policies under the Physician Fee Schedule, Clinical Laboratory Fee
Schedule & Other Revisions to Part B for CY 2014; Final Rule, 78 FR
74230 (Dec. 10, 2013). Medicare Program; Revisions to Payment
Policies under the Physician Fee Schedule, Clinical Laboratory Fee
Schedule & Other Revisions to Part B for CY 2015; Final Rule, 79 FR
67907 (Nov. 13, 2014). Medicare Program; Revisions to Payment
Policies under the Physician Fee Schedule, Clinical Laboratory Fee
Schedule & Other Revisions to Part B for CY 2016; Final Rule, 80 FR
71263 (Nov. 16, 2015).
---------------------------------------------------------------------------
As an alternative approach, we considered creating CMS Web
Interface specific benchmarks for MIPS instead of using the Shared
Savings Program benchmarks. This alternative approach for MIPS
benchmarks would be restricted to CMS Web Interface reporters and would
not include other MIPS data submission methods or other data sources
which are currently used to create the Shared Savings Program
benchmarks. This alternative would also apply the topped out cluster
approach if any measures are topped out. While we see benefit in having
CMS Web Interface methodology match the other MIPS benchmarks, we are
also concerned about the Shared Savings Program and the Next Generation
ACO Model participants having conflicting benchmark data. We requested
comments on building CMS Web Interface specific benchmarks.
We proposed that all MIPS eligible clinicians, regardless of
whether they report as an individual or group, and regardless of
specialty, that submit data using the same submission mechanism would
be included in the same benchmark. We proposed to unify the calculation
of the benchmark by using the same approach as the VM of weighting the
performance rate of each MIPS eligible clinician and group submitting
data on the quality measure by the number of beneficiaries used to
calculate the performance rate so that group performance is weighted
appropriately (77 FR 69321 through 69322). We would also include data
from APM Entity submissions in the benchmark but would not score APM
Entities using the MIPS scoring methodology. For APM scoring, we refer
to section II.E.5.h. of the proposed rule (81 FR 28234).
To ensure that we have robust benchmarks, we proposed that each
benchmark must have a minimum of 20 MIPS eligible clinicians who
reported the measure meeting the data completeness requirement defined
in section II.E.5.b.(3) of the proposed rule (81 FR 28185), as well as
meeting the required case minimum criteria for scoring that is defined
later in this section. We proposed a minimum of 20 because, as
discussed below, our benchmarking methodology relies on assigning
points based on decile distributions with decimals. A decile
distribution requires at least 10 observations. We doubled the
requirement to 20 so that we would be able to assign decimal point
values and minimize cliffs between deciles. We did not want to increase
the benchmark sample size requirement due to concerns that an increase
could limit the number of measures with benchmarks.
We also proposed that MIPS eligible clinicians who report measures
with a performance rate of 0 percent would not be included in the
benchmarks. In our initial analysis, we identified some measures that
had a large cluster of eligible clinicians with a 0 percent performance
rate. We were concerned that the 0 percent performance rate represents
clinicians who are not actively engaging in that measurement activity.
We did not want to inappropriately skew the distribution. We solicited
comment on whether or not to include 0 percent performance in the
benchmark.
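The benchmark-eligibility rules discussed in the two preceding paragraphs (exclude 0 percent performance rates, then require a minimum of 20 qualifying reporters) can be sketched as follows; the helper name is hypothetical:

```python
def benchmark_sample(performance_rates, min_reporters=20):
    """Illustrative sketch: drop 0 percent performance rates, then
    require at least `min_reporters` remaining reporters before a
    benchmark is built. Returns None when no benchmark can be built."""
    nonzero = [r for r in performance_rates if r > 0]
    return nonzero if len(nonzero) >= min_reporters else None
```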
We proposed at Sec. 414.1380(b)(1)(i) to base the benchmarks on
performance in the baseline period when possible. We
[[Page 77278]]
proposed to publish the numerical benchmarks when possible, prior to
the start of the performance period. In those cases, where we do not
have comparable data from the baseline period, we proposed to use
information from the performance period to establish benchmarks. While
the benchmark methodology would be established in a final rule in
advance of the performance period, we proposed that the actual
numerical benchmarks would not be published until after the performance
period for quality measures that do not have comparable data from the
baseline period. The methodology for creating the benchmarks was
discussed in the proposed rule (81 FR 28251).
We considered not scoring measures that either are new to the MIPS
program or do not have a historical benchmark based on performance in
the baseline period. This policy would be consistent with the VM policy
in which we do not score measures that have no benchmark (77 FR 69322).
However, in the proposed rule (81 FR 28252), we expressed concern
that such a policy could stifle reporting on innovative new measures
because it would take several years for the measure to be incorporated
into the performance category score. We also believed that any issues
related to reporting a new measure would not disproportionately affect
the relative performance between MIPS eligible clinicians.
We also considered a variation on the scoring methodology that
would provide a floor for a new MIPS measure. Under this variation, if
a MIPS eligible clinician reports a new measure under the quality
performance category, the MIPS eligible clinician would not score lower
than 3 points for that measure. This would encourage reporting on new
measures, but also prevent MIPS eligible clinicians from receiving the
lowest scores for a new measure, while still measuring variable
performance. Finally, we also considered lowering the weight of a new
measure, so that new measures would contribute relatively less to the
score compared to other measures. In the end, we did not propose the
alternatives we considered because we wanted to encourage adoption and
measured performance of new measures; however, we did request comment
on these alternatives, including comments on what the lowest score
should be for MIPS eligible clinicians who report a new measure under
the quality performance category and protections against potential
gaming related to reporting of new measures only. We also sought
comments on alternative methodologies for scoring new measures under
the quality performance category, which would assure equity in scoring
between the methodology for measures for which there is baseline period
data and for new measures which do not have baseline period data
available.
Finally, we clarified that some PQRS reporting mechanisms have
limited experience with all-payer data. For example, under PQRS, all-
payer data was permitted only when reporting via registries for measure
groups; reporting via registries for individual measures was restricted
to Medicare only. Under MIPS, however, we proposed to have more robust
data submissions, as described in section II.E.5.b.(3) of the proposed
rule (81 FR 28188). We recognized that comparing all-payer performance
to a benchmark that is built, in part, on Medicare data is a limitation
and noted we would monitor the benchmarks to see if we need to develop
separate benchmarks. We also noted that this data issue would resolve
in a year or two, as new MIPS data becomes the historical benchmark
data in future years.
The following is a summary of the comments we received regarding
our proposals for quality measure benchmarks.
    Comment: Commenters generally supported our proposed approach; some
commenters supported the establishment of separate benchmarks for
submission mechanisms that do not have comparable measure
specifications, and another supported using national benchmarks and
linear-based scoring in the MIPS performance scoring methodology.
Response: We agree with commenters and are finalizing at Sec.
414.1380(b)(1)(iii) the establishment of separate benchmarks for the
following submission mechanisms: EHR submission options; QCDR and
qualified registry submission options; claims submission options; CMS
Web Interface submission options; CMS-approved survey vendor for CAHPS
for MIPS submission options; and administrative claims submission
options. We note that the administrative claims benchmarks are for
measures derived from claims data, such as the readmission measure. As
discussed below, the CMS Web Interface submission benchmarks will be
the same as the Shared Savings Program benchmarks for the corresponding
Shared Savings Program performance period. We note that assigning
separate benchmarks in this manner creates opportunities for clinicians
to achieve higher quality scores by selectively choosing submission
mechanisms; as discussed in section II.E.5.a.(2) in this final rule
with comment period, we intend to monitor for such activity and to
report back on any findings from our monitoring in future rulemaking.
Comment: Commenters requested that CMS provide each measure's
benchmarks in advance, with one recommending that CMS do so in the
final rule and in future proposed rules so that MIPS eligible
clinicians know their target goals or, alternatively, that CMS hold a
listening session for input on benchmarks for each measure. The
commenters stated that they did not want to be held accountable for
performance if benchmarks cannot be provided in advance. One commenter
noted that it would be difficult to gauge performance and areas for
improvement since benchmarks would not be released in time and real
time feedback is needed.
Response: We agree with commenters that quality benchmarks should
be made public and should be known in advance when possible so that
MIPS eligible clinicians can understand how they will be measured. We
are finalizing that measure benchmarks are based on historical
performance for the measures based on a baseline period. Those
benchmarks will be known in advance of the performance period. We
finalize this approach with one exception. The CMS Web Interface will
use benchmarks from the corresponding performance year of the Shared
Savings Program and not the baseline year. Those benchmarks are also
known in advance of the performance period.
When no comparable data exists from the baseline period, then we
finalize that we will use information from the performance period (CY
2017 for the transition year, during which MIPS eligible clinicians may
report for a minimum of any continuous 90-day period, as discussed in
section II.E.4 of this final rule with comment period) to assess
measure benchmarks. In this case, while the benchmark methodology is
being finalized in this final rule with comment period, the numerical
benchmarks will not be known in advance of the performance period.
However, as discussed throughout this final rule with comment period,
we have added safeguards to protect MIPS eligible clinicians from poor
performance scores, particularly in the transition year.
Comment: Some commenters did not support the use of 2015 data or
other historical data to set the 2017 benchmarks, with one commenter
stating that CMS would be using data from periods during which MIPS did
not exist and requesting that CMS establish an adequate foundation for
[[Page 77279]]
benchmarks based on MIPS data. One commenter recommended that CMS not
set benchmarks or hold clinicians accountable for performance until it
has established an adequate foundation based on MIPS data. Another
emphasized using reliable and valid patient sample sizes or an adequate
foundation of data to determine benchmarks, even if only for a limited
number of measures.
Response: In establishing the performance standards, we had to
choose between two feasible alternatives: Either develop benchmarks
based on historical data and provide the numerical benchmarks in
advance of the performance period; or use more current data for
benchmarks and not provide the numerical benchmarks in advance of the
performance period. We believe there is more value in providing advance
notice for quality performance category measures so that MIPS eligible
clinicians can set a clear performance goal for these measures,
provided that historical data is available. In many cases, MIPS quality
measures are the same as those available under PQRS, so we believe that
using PQRS data is appropriate for a MIPS benchmark. In contrast, we do
not believe there is more value in providing advance notice for cost
performance category measures since the claims data for the cost
performance category can vary due to payment policies, payment rate
adjustment and other factors. Therefore, we believe having the cost
performance category measures based on performance period data will be
more beneficial to MIPS eligible clinicians given that it is based on
more current data. For the cost performance category, we believe it is
more beneficial to base performance on the performance period.
Comment: A few commenters opposed our benchmarking approach, with
some opposing our proposal to separate benchmarks solely by submission
mechanism given that medical groups vary by size, location, specialty
and other factors which should be built into developing the benchmarks.
Commenters recommended specialty-specific benchmarks, benchmarking by
region, and benchmarks based on group size (for example, groups with
10-50 clinicians, 51-100 clinicians, 101-500 clinicians, 501-1,000
clinicians, and >1,000 clinicians). In other words, commenters did not
believe in one overall benchmark but rather that groups should be
compared only to other similar groups (for example, APM entities to APM
entities, individuals to individuals, clinicians by specialty and
groups to groups, small practices to small practices, or region by
region).
Response: We want the benchmarks to be as broad and inclusive as
possible and to establish a single performance standard whenever the
measure specifications are comparable. We finalized separate benchmarks
by submission mechanism only when the differences in specifications
make comparisons less valid. We do not believe differences in
specialty, group size, and region create an inherent need for separate
benchmarks as the specifications are comparable across each of these
categories. Furthermore, we do not expect differences in location,
practice size, and other characteristics to impact the quality of care
provided. We also want to keep robust sample sizes in each benchmark,
and stratifying a benchmark by different characteristics would risk
fragmenting the sample size in such a manner that we do not have a
valid benchmark for some measures.
    We estimated quality performance scores by practice size based on
historical data and, among MIPS eligible clinicians that submitted
complete and reliable data, did not see a systematic difference in
performance by practice size that would require separate benchmarks. However, as we
monitor the MIPS program, we will continue to evaluate whether we need
to further refine and stratify the benchmarks.
Comment: One commenter recommended that CMS should analyze the
quality performance data by looking at Medicare and non-Medicare
populations separately, and should also examine whether stratifying the
performance data by specialty code, site-of-service code,
or both will result in more accurate measurement and fair adjustments
for physicians who treat the sickest patients.
Response: We want accurate and fair measurement in the MIPS
program. We have incorporated measures that have gone through public
review. In many cases, we believe the measure developers have
considered scenarios where risk adjustment is required to consider mix
of patient population and site-of-service and do not believe we need a
separate universal policy to further stratify performance by patient
mix, specialty, or site of service for all measures. As we move through
the transition year, however, we will continue to evaluate the need for
additional adjustments or stratification for informational purposes and
would make any proposed adjustments through future rulemaking.
Comment: One commenter expressed their belief that integrating data
from MIPS eligible clinicians participating in MIPS APMs with data from
MIPS eligible clinicians who do not participate in APMs will skew the
universe of reported data toward better performance, as MIPS APM
participants tend to be more advanced and well resourced, putting MIPS
eligible clinicians who do not participate in APMs at a disadvantage in
scoring. The commenter recommended segregating such data for purposes
of setting MIPS benchmarks for 2019 payment adjustments.
    Response: As discussed above, we believe in having datasets that
are as inclusive and robust as possible for benchmarks. We note that we are
building benchmarks by comparable submission mechanism and not all
submission mechanisms will have APM data; however, we believe it is
important to include APM participants when comparable information is
available because the benchmark represents the true distribution of
performance. We do not want to establish separate, potentially lower,
standards of care for clinicians who are not in APMs. In addition, as
more MIPS eligible clinicians transition to APMs, we may not have
sufficient volume to create benchmarks based on MIPS eligible clinicians
alone.
    Comment: A few commenters believed CMS should not allow a ``new''
physician's quality measure performance to count against the practice
under the Quality Payment Program if the physician has not been with
that practice for more than 6 months. Another commenter recommended
that CMS allow physicians who have practiced for less than 12 months to
self-identify so that their scoring can take into account the
physician's limited data.
    Response: We appreciate the commenters' feedback and will restrict
the data for the benchmarks to MIPS eligible clinicians and, as
discussed above, the benchmarks will include comparable APM data,
including data from QPs and Partial QPs. We believe these steps will
help ensure the validity and completeness of the benchmark data.
Comment: Some commenters expressed concern regarding the
comparability of measures from different EHR vendor systems. One
commenter noted that data submitted from different EHR vendor systems
may use different methodologies, as well as inconsistent numerators and
denominators, and will therefore not be comparable across systems and
clinicians. This commenter recommended that CMS work with ONC to
standardize data submitted to Medicare across a number of vendor
systems. Another commenter requested
[[Page 77280]]
that CMS incorporate work by medical societies to implement guides to
ensure eCQM calculations and benchmarks are accurate and that different
EHRs are accurately capturing eCQMs. Another commenter cautioned that
in the case of EHRs, eCQMs are also not uniformly calculated across
EHRs, as several different administrative code sets are used. This
commenter recommended that CMS create standards and mapping tools to
facilitate working across these different codes, ensure consistency
when EHR data is exchanged, and ensure eCQM calculations and benchmarks
are accurate. The commenter also noted that different EHRs are more
accurate at capturing eCQMs.
Response: To date, there have been issues with EHR data accuracy
and consistency. We have worked with ONC to address these issues
through public feedback mechanisms, the availability of tools to
support eCQM testing and value set uploads, and by encouraging vendors
to consume the health quality measure format (HQMF) measure
specifications directly. As these improvements penetrate to all systems
in use by providers, we expect to see improvements in eCQM consistency.
We will continue to work with ONC to consider eliminating transitional
code systems to further improve alignment
of the eCQM data elements, and we will continue to engage with sites
and stakeholder organizations to identify methods to further ensure
consistency across sites and systems.
Comment: Commenters generally supported our proposal to use the
Shared Savings benchmarks for CMS Web Interface. One commenter
supported our alternative approach of building our own benchmarks for
CMS Web Interface measures.
    Response: We appreciate the commenters' support and are finalizing
our proposal to use the Shared Savings benchmarks for the CMS Web
Interface. However, as we discuss in more detail below, we are adding a
floor of 3 points for each measure for the transition year. Therefore,
any values that are below the 30th percentile will receive a score of 3
points.
Comment: Some commenters agreed that 0 percent performance rates
should be excluded from benchmark calculations. One commenter suggested
including 0 percent performance rates in benchmark calculations but
distinguishing the data that was intentionally submitted from data that
was unintentionally submitted from EHR reporting. Another commenter
suggested rewarding clinicians that reported on a measure if more than
50 percent of MIPS eligible clinicians reported zero on that measure,
noting that removing zeroes would artificially increase the benchmark
for any given measure.
    Response: We appreciate that in some circumstances a 0 percent
performance rate may be a valid score; however, we are also concerned
about skewing
the distribution with potentially inaccurate scores. We are finalizing
the policy to exclude 0 percent scores from the benchmarks for the
transition year. We will continue to evaluate the impact of 0 percent
scores on benchmarks. However, as described below, we are adding a
floor for the transition year of MIPS, which will limit the effect of
this adjustment on MIPS eligible clinicians' scores.
Comment: One commenter did not agree with our proposal to use the
Value Modifier approach to weight the performance of individuals and
groups by the number of beneficiaries to create a single set of
benchmarks. The commenter was concerned about combining both
individuals and groups into one set of benchmarks. The commenter
recommended simplifying the performance standards and incorporating
aspects of the Shared Savings Program and VM into this MIPS category.
Response: As discussed above, we believe that both individuals and
groups reporting through the same submission mechanism are comparable,
as the measure specifications are similar. In the proposed rule, we
proposed to combine the group and individual data into a single
benchmark by using the VM approach of patient weighting. However, after
further analysis, we do not believe this approach is appropriate for
the MIPS program.
The VM defines relative performance as statistical difference from
the mean for a measure, and weights each clinician's performance rate
by the number of beneficiaries to identify the average score for a
measure, a single unit. However, unlike the VM, in MIPS, we are not
defining relative performance by using a single point, but rather a
percentile distribution of the reliable clinician summary performance
scores. We have taken steps to ensure that each clinician or group
score meets certain standards to promote reliability at the group or
individual clinician level. For example, the group or individual
reporter must meet certain case volume and data completeness standards
to be included in the MIPS benchmark. In MIPS, weighting individual or
group values by the number of patients is similar to cloning or
replicating that individual or group score in the percentile
distribution. In a distribution benchmark, weighting will not have an
impact in the following cases: when the distribution of scores is
highly compressed (low variance); when the distribution of cases is
highly compressed (such as when all practices have fairly similar
numbers of cases); or when the number of practices is large relative to
the typical number of eligible cases for any practice for the measure.
However, the difference between unweighted and weighted benchmarks is
more likely to have an impact when the number of eligible cases and
corresponding performance scores vary widely across practices. The
difference will be exacerbated if there are relatively few practices
and/or if practices with especially high or low scores also have a
disproportionately large number of cases. For example, assume a given
benchmark has one large group and several smaller groups and individual
reporters. The large group cares for 20 percent of the beneficiaries
represented in the benchmark. If we weight the benchmark by patient
weight, then another MIPS eligible clinician with a score just above or
just below that performance rate will have a score that is different by
a point or two, not because of differences in performance but because
of differences in the number of beneficiaries cared for by the group or
individual MIPS eligible clinician.
Therefore, we are not finalizing our proposal to patient weight the
benchmarks. Instead, we will count each submission, either by
individual or group, as a single data point for the benchmark. We
believe this data is reliable and the revision simplifies the
combination of group and individual performance.
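The finalized unweighted approach can be sketched as follows, with each individual or group submission contributing exactly one data point to the benchmark distribution regardless of beneficiary count. The names are illustrative only; the proposed (and not finalized) VM-style alternative would instead have weighted each rate by the submitter's beneficiary count:

```python
def unweighted_benchmark_points(submissions):
    """Illustrative sketch of the finalized policy: each submission,
    whether from an individual or a group, counts once in the benchmark
    distribution. `submissions` is a list of hypothetical
    (performance_rate, beneficiary_count) pairs; the count is ignored."""
    return sorted(rate for rate, _count in submissions)
```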
Comment: Some commenters did not agree with our proposal to use
performance period data to set benchmarks in instances where the
measure is a new measure or there is a change to an existing measure.
Instead, the commenter recommended just giving credit for reporting the
measure. Another commenter recommended that new measures receive a
score equal to the 90th percentile if the reporting rates are met.
Another commenter supported not scoring new quality measures until 2
years after introduction. Another commenter recommended that MIPS
eligible clinicians reporting new measures be held harmless from
negative scoring.
Response: To encourage meaningful measurement, we want to score all
available measures for performance, including new measures. However,
[[Page 77281]]
because new measures would not have a benchmark available prior to the
start of the performance period, we are creating a 3-point new measure
floor specifically for new measures and measures without a benchmark
based on baseline period data. This floor would be available annually
to any measure without a published benchmark. Generally, we would
expect new measures to have the 3-point floor for the first 2 years
until we get baseline data for that measure. This approach helps to
ensure that the MIPS eligible clinicians are protected from a poor
performance score that they would not be able to anticipate. As we
discussed in section II.E.6.a.(2)(b) below, we are also setting a
global 3-point floor for all submitted measures during the transition
year. We would like to note that the global 3-point floor for all
measures is a policy for the transition year of MIPS. In contrast, the
new measure 3-point floor for measures without a previously published
benchmark, such as new measures, would be available in future years of
MIPS and not just the transition year. We also note that the new
measure 3-point floor for measures without a previously published
benchmark, is different than class 2 measures, as defined later in
section II.E.6.a.(2)(c) of this rule and summarized in Table 17, that
lack a benchmark because we do not have a minimum of 20 MIPS eligible
clinicians who reported the measure meeting the case minimum and data
completeness requirements. The new measure 3-point floor allows MIPS
eligible clinicians to be scored on performance in which the lowest
score possible for a measure will be 3 points, and the highest possible
score is 10 points assuming the new measure has a benchmark and the
MIPS eligible clinician has met the case minimum and data completeness
criteria. However, class 2 measures, as defined in Table 17, do not
receive a floor but rather an automatic score of 3 points, in which MIPS
eligible clinicians are not scored on performance and would only
receive 3 points for that measure.
We considered giving a set number of points for submitting a new
measure, rather than measuring performance. We do not think it is
equitable to give the maximum performance score (a score equal to the
90th percentile or the top decile) when other eligible clinicians may
receive fewer points based on performance.
Comment: Many commenters expressed support for our alternative
approach that if a MIPS eligible clinician reports a new measure under
the quality performance category, the MIPS eligible clinician will not
score lower than 3 points for that measure. One commenter agreed with
the assessment that this would encourage clinicians to report new
measures, prevent clinicians from gaming the system by reporting only
on new measures to avoid being compared to a benchmark, and still
incentivize better performance on the new measure. This commenter also
expressed support for the alternative to weight new measures less than
measures with existing benchmark data, stating that this will also
accomplish the above goals. Two commenters recommended that CMS apply
this minimum floor proposal both to the transition year in which the
measure is available in MIPS and to the first time the eligible
clinician reports on the measure. One commenter noted that this will
encourage reporting on new measures and help mitigate potential
unintended consequences.
Response: We are finalizing the alternative approach for the
scoring of new measures, or measures without a comparable historical
benchmark, to have a floor of 3 points until baseline data can be
utilized. We note that the floor applies only when the new measure does
not have a benchmark based on baseline data; it does not apply the first
time an eligible clinician reports on the measure in subsequent years.
In addition, for the transition year (first year) only, we are also
implementing a global floor of 3 points for all submitted quality
measures, not only new measures. This floor, along with changes in the
performance threshold, affords MIPS eligible clinicians the ability to
learn about MIPS and be protected from a negative adjustment in the
transition year for any level of performance.
Comment: One commenter noted that, while ensuring that an eligible
clinician reporting a new measure would not receive a score lower than
three points may incentivize reporting of new measures, the commenter
was concerned that doing so may artificially inflate the measure's
benchmark, and adversely affect clinicians reporting the measure in
year 2, when scoring would be based on that inflated benchmark. This
commenter recommended that CMS establish
measure benchmarks based only on true measure performance instead of
potentially inflated, incentivized performance.
Response: We would like to note that the benchmarks are based on
the performance rates for the measures, not on the assigned points.
Therefore, the floor for new measures should not affect future
benchmarks. Table 16 has an example of how the floor would work.
Table 16--Example of Using Benchmarks for a Single Measure To Assign Points With a Floor of 3 Points
----------------------------------------------------------------------------------------------------------------
                                                               Sample quality      Possible points     Possible points
                  Benchmark decile                             measure             with 3-point        without 3-point
                                                               benchmarks (%)      floor               floor
----------------------------------------------------------------------------------------------------------------
Benchmark Decile 1...........................................      0.0-9.5              3.0                1.0-1.9
Benchmark Decile 2...........................................      9.6-15.7             3.0                2.0-2.9
Benchmark Decile 3...........................................     15.8-22.9             3.0-3.9            3.0-3.9
Benchmark Decile 4...........................................     23.0-35.9             4.0-4.9            4.0-4.9
Benchmark Decile 5...........................................     36.0-40.9             5.0-5.9            5.0-5.9
Benchmark Decile 6...........................................     41.0-61.9             6.0-6.9            6.0-6.9
Benchmark Decile 7...........................................     62.0-68.9             7.0-7.9            7.0-7.9
Benchmark Decile 8...........................................     69.0-78.9             8.0-8.9            8.0-8.9
Benchmark Decile 9...........................................     79.0-84.9             9.0-9.9            9.0-9.9
Benchmark Decile 10..........................................     85.0-100             10                 10
----------------------------------------------------------------------------------------------------------------
In this example, we still create an array of percentile
distributions for benchmarks and decile breaks. However, where we would
normally assign between 1.0-2.9 points for MIPS eligible clinicians
with performance in the first or second deciles (in this example,
performance between 0 and 15.7 percent), we will now assign 3.0
[[Page 77282]]
points. In future years, however, as baseline data becomes available
for new measures, we would remove the floor and assign points less than
3, as illustrated above. For example, a performance rate of 9.6 percent
(start of the 2nd decile), would receive 3.0 points with the floor and
only 2.0 points without the floor. This methodology will not affect the
scoring for MIPS eligible clinicians with performance in the third
decile or higher. In addition, this methodology will not affect the
calculation of future benchmarks. We do note, however, that if a MIPS
eligible clinician consistently has poor performance, then by the time
the baseline data can be used, the MIPS eligible clinician may receive
fewer points because the floor has been removed.
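The decile scoring with the 3-point floor illustrated in Table 16 can be sketched in code. This is an illustrative sketch only, not CMS's scoring implementation: the decile breaks are the sample benchmark values from Table 16, and the linear interpolation used to assign partial points within a decile is an assumption consistent with the point ranges shown in the table.

```python
# Illustrative sketch of decile-based quality measure scoring with an
# optional 3-point floor, using the sample decile breaks from Table 16.
# Not CMS's official implementation.

# Lower bound (%) of each benchmark decile, Deciles 1 through 10.
DECILE_BREAKS = [0.0, 9.6, 15.8, 23.0, 36.0, 41.0, 62.0, 69.0, 79.0, 85.0]

def assign_points(performance_rate, apply_floor=True):
    """Return measure points: 3.0-10 with the floor, 1.0-10 without.

    Partial points are interpolated linearly within the decile to avoid
    performance cliffs at the decile breaks (an assumption about how the
    partial points are computed).
    """
    # Find the (1-based) decile the performance rate falls into.
    decile = 1
    for i, lower in enumerate(DECILE_BREAKS, start=1):
        if performance_rate >= lower:
            decile = i
    if decile == 10:
        points = 10.0
    else:
        lower = DECILE_BREAKS[decile - 1]
        upper = DECILE_BREAKS[decile]
        fraction = (performance_rate - lower) / (upper - lower)
        points = decile + min(fraction, 1.0) * 0.9
    if apply_floor:
        # Transition-year / new-measure floor described above.
        points = max(points, 3.0)
    return round(points, 1)

# A performance rate of 9.6% (start of Decile 2): 3.0 with the floor,
# only 2.0 without it; Decile 3 and above are unaffected by the floor.
print(assign_points(9.6))                     # 3.0
print(assign_points(9.6, apply_floor=False))  # 2.0
print(assign_points(90.0))                    # 10.0
```

As in the rule's example, the floor only changes scores that would otherwise fall in the first two deciles; benchmarks themselves are built from performance rates, not from these assigned points.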
After consideration of the comments on quality measure benchmarks,
we are finalizing many policies as proposed. Specifically:
For quality measures for which baseline period data is
available, we are establishing at Sec. 414.1380(b)(1)(i) that measure
benchmarks are based on historical performance for the measure during a
baseline period. Each benchmark must have a minimum of 20 individual
clinicians or groups who reported the measure meeting the data
completeness requirement and minimum case size criteria and performance
greater than zero. We will restrict the benchmarks to data from MIPS
eligible clinicians, and, as discussed above, comparable APM data,
including data from QPs and Partial QPs.
We will publish the numerical baseline period benchmarks prior to
the start of the performance period (or as soon as possible
thereafter).
For quality measures for which there is no comparable data
from the baseline period, we are establishing at Sec.
414.1380(b)(1)(ii) that CMS will use information from the performance
period to create measure benchmarks. We will publish the numerical
performance period benchmarks after the end of the performance period.
In section II.E.4 of this final rule with comment period, we are
finalizing that for the transition year, the performance period will be
a minimum of any continuous 90-day period within CY 2017. Therefore,
for MIPS payment year 2019, we will use data submitted for performance
in CY 2017, during which MIPS eligible clinicians may report for a
minimum of any continuous 90-day period.
We are establishing at Sec. 414.1380(b)(1)(iii) that separate
benchmarks are used for the following submission mechanisms: EHR
submission options; QCDR and qualified registry submission options;
claims submission options; CMS Web Interface submission options; CMS-
approved survey vendor for CAHPS for MIPS submission options; and
administrative claims submission options. As discussed above, we are
not stratifying benchmarks by other practice characteristics, such as
practice size. For the reasons discussed above, we do not believe that
there is a compelling rationale for such an approach, and we believe
that stratifying could have unintended negative consequences for the
stability of the benchmarks, equity across practices, and quality of
care for beneficiaries. However, we continue to receive feedback that
small practices should have a different benchmark, so we seek comment
on any rationales for or against stratifying by practice size that we
may not have considered.
We are establishing at Sec. 414.1380(b)(1)(ii)(A) that
the CMS Web Interface submission will use benchmarks from the
corresponding reporting year of the Shared Savings Program. We will
post the MIPS CMS Web Interface benchmarks in the same manner as the
other MIPS benchmarks. We are not building CMS Web Interface-specific
benchmarks for the MIPS. We will apply the MIPS scoring methodology to
each measure. Measures below the 30th percentile will be assigned a
value of 3 points during the transition year to be consistent with the
global floor established in this rule for other measures. We will
revisit this global floor for future years.
We are modifying our proposed policy with regard to patient
weighting. Based on public comments, we are not finalizing our proposal
to weight the performance rate of each MIPS eligible clinician and
group submitting data on the quality measure by the number of
beneficiaries used to calculate the performance rate. Instead, we will
count each submission, either by an individual or group, as a single
data point for the benchmark. We believe the original proposal could
create potential unintended distortions in the benchmark. Therefore, we
believe it is more appropriate to use a distribution of each individual
or group submission that meets our criteria to ensure reliable and
valid data.
We are also modifying our proposed policy for scoring new measures.
Based on public comments, for the transition year and subsequent years
of MIPS, we are adding protection against being unfairly penalized for
poor performance on measures without benchmarks by finalizing a 3-point
floor for new measures and measures without a benchmark. As discussed
in more detail in the next section, for the transition year of MIPS we
are also finalizing a 3-point floor for all submitted measures. We will
revisit this policy in future years.
(b) Assigning Points Based on Achievement
We proposed in Sec. 414.1380(b)(1)(x) of the proposed rule (81 FR
28251) to establish benchmarks using a percentile distribution,
separated into deciles, because it translates measure-specific score
distributions into a uniform distribution of MIPS eligible clinicians
based on actual performance values. For each set of benchmarks, we
proposed to calculate the decile breaks for measure performance and
assign points for a measure based on the benchmark decile range in
which the MIPS eligible clinician's performance rate on the measure
falls. For example, MIPS eligible clinicians in the top decile would
receive 10 points for the measure, and MIPS eligible clinicians in the
next lower decile would receive points ranging from 9 to 9.9. We
proposed to assign partial points to prevent performance cliffs for
MIPS eligible clinicians near the decile breaks. The partial points
would be assigned based on the percentile distribution.
Table 17 of the proposed rule (81 FR 28252) illustrated an example
of using decile points along with partial points to assign achievement
points for a sample quality measure. We noted in the proposed rule (81
FR 28252) that any MIPS eligible clinician who reports some level of
performance would receive a minimum of one point for reporting if the
measure has the required case minimum, assuming the measure has a
benchmark.
We did not propose to base scoring on decile distributions for the
same measure ranges as described in Table 17 of the proposed rule when
performance is clustered at the high end (that is, ``topped out''
measures), as true variance cannot be assessed. MIPS eligible
clinicians report on different measures and may elect to submit
measures on which they expect to perform well. For MIPS eligible
clinicians electing to report on measures where they expect to perform
well, we anticipated many measures would have performance distributions
clustered near the top. We proposed to identify ``topped out'' measures
by using a definition similar to the definition used in the Hospital
VBP Program: Truncated
[[Page 77283]]
Coefficient of Variation \22\ is less than 0.10 and the 75th and 90th
percentiles are within 2 standard errors; \23\ or median value for a
process measure that is 95 percent or greater (80 FR 49550).\24\
---------------------------------------------------------------------------
\22\ The 5 percent of MIPS eligible clinicians with the highest
scores, and the 5 percent with the lowest scores, are removed before
calculating the Coefficient of Variation.
\23\ This is a test of whether the range of scores in the upper
quartile is statistically meaningful.
\24\ This last criterion is in addition to the HVBP definition.
---------------------------------------------------------------------------
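The topped-out test just described can be sketched as follows. This is an illustrative sketch under stated assumptions: the rule does not define the percentile method or the standard error computation, so the nearest-rank percentile and the standard error of the mean (stdev / sqrt(n)) used here are assumptions, and `is_topped_out` is a hypothetical helper name.

```python
# Illustrative sketch of the topped-out criteria described above:
# truncated coefficient of variation < 0.10 with the 75th and 90th
# percentiles within 2 standard errors, OR (for a process measure) a
# median of 95 percent or greater. Assumptions are noted in comments.
import math
import statistics

def truncated_cv(scores):
    """Coefficient of variation after dropping the top and bottom 5%."""
    s = sorted(scores)
    k = int(len(s) * 0.05)
    trimmed = s[k:len(s) - k] if k else s
    mean = statistics.mean(trimmed)
    return statistics.stdev(trimmed) / mean if mean else float("inf")

def percentile(scores, p):
    """Nearest-rank percentile (an assumption; methods vary)."""
    s = sorted(scores)
    idx = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[idx]

def is_topped_out(scores, is_process_measure=False):
    # Standard error of the mean is an assumption; the rule does not
    # specify which standard error is used.
    se = statistics.stdev(scores) / math.sqrt(len(scores))
    narrow_top = abs(percentile(scores, 90) - percentile(scores, 75)) <= 2 * se
    if truncated_cv(scores) < 0.10 and narrow_top:
        return True
    # Additional criterion beyond the Hospital VBP definition (footnote 24).
    if is_process_measure and statistics.median(scores) >= 95.0:
        return True
    return False

# A process measure clustered at the high end is flagged as topped out.
clustered = [96, 97, 98, 98, 99, 99, 99, 100, 100, 100]
print(is_topped_out(clustered, is_process_measure=True))  # True
```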
Using 2014 PQRS quality reported data measures, we modeled the
proposed benchmark methodology and identified that approximately half
of the measures proposed under the quality performance category are
topped out. Several measures have a median score of 100 percent, which
makes it difficult to assess relative performance needed for the
quality performance category score.
However, we did not believe it would be appropriate to remove
topped out measures at this time. As not all MIPS eligible clinicians
would be required to report these measures under our proposals for the
quality performance category in section II.E.5.b. of the proposed rule
(81 FR 28184), it would be difficult to determine whether a measure is
truly topped out or if only excellent performers are choosing to report
the measure. We also believed removing such a large volume of measures
would make it difficult for some specialties to have enough applicable
measures to report. At the same time, we did not believe that the
highest values on topped out measures convey the same meaning of
relative quality performance as the highest values for measures that
are not topped out. In other words, we did not believe that eligible
clinicians electing to report topped out process measures should be
able to receive the same maximum score as eligible clinicians electing
to report preferred measures, such as outcome measures.
Therefore, we proposed to modify the benchmark methodology for
topped out measures. Rather than assigning up to 10 points per measure,
we proposed to limit the maximum number of points a topped out measure
can achieve based on how clustered the scores are. We proposed to
identify clusters within topped out measures and would assign all MIPS
eligible clinicians within the cluster the same value, which would be
the number of points available at the midpoint of the cluster. That is,
we proposed to take the midpoint of the highest and lowest scores that
would pertain if the measure were not topped out and the values were not
clustered. We proposed to only apply this methodology for benchmarks
based on the baseline period. When we develop the benchmarks, we would
identify the clusters and state the points that would be assigned when
the measure performance rate is in a cluster. We proposed to notify
MIPS eligible clinicians when those benchmarks are published with
regard to which measures are topped out.
We proposed this approach because we wanted to encourage MIPS
eligible clinicians not to report topped out measures, but to instead
choose other measures that are more meaningful. We also sought feedback
on alternative ways and an alternative scoring methodology to address
topped out measures so that topped out measures do not
disproportionately affect a MIPS eligible clinician's quality
performance category score. Other alternatives could include placing a
limit on the number of topped out measures MIPS eligible clinicians may
submit or reducing the weight of topped out measures. We also
considered whether we should apply a flat percentage in building the
benchmarks, similar to the Shared Savings Program, where MIPS eligible
clinicians are scored on their performance rate as a percentage and
not on a decile distribution, and we requested comment on how to apply such
a methodology without providing an incentive to report topped out
measures. Under the Shared Savings Program, 42 CFR 425.502, there are
circumstances when benchmarks are set using flat percentages. For some
measures, benchmarks are set using flat percentages when the 60th
percentile was equal to or greater than 80.00 percent, effective
beginning with the 2014 reporting year (78 FR 74759-74763). For other
measures benchmarks are set using flat percentages when the 90th
percentile was equal to or greater than 95.00 percent, effective
beginning in 2015 (79 FR 67925). Flat percentages allow those with high
scores to earn maximum or near maximum quality points while allowing
room for improvement and rewarding that improvement in subsequent
years. Use of flat percentages also helps ensure those with high
performance on a measure are not penalized as low performers. We also
noted that we anticipate removing topped out measures over time, as we
work to develop new quality measures that will eventually replace these
topped out measures. We requested feedback on these proposals.
The following is a summary of the comments we received regarding
our proposal to assign points based on achievement.
Comment: Many commenters supported the use of the decile scoring
method for non-topped-out measures, including the partial point
allocation, but some cautioned that without stronger clarification, the
scoring complexity would create considerable confusion among MIPS
eligible clinicians. One commenter wanted to know how CMS would capture
partial credit in the quality performance category. The commenter also
wanted to know if there is a standardized grading scale used to
determine where a clinician/practice might fall between 0-10 points.
Response: We appreciate the support for the decile scoring. We are
finalizing the decile scoring method for assigning points, but for the
transition year, we are also adding a 3-point floor for all submitted
measures, as well as for the readmission measure (if the readmission
measure is applicable). This means that MIPS eligible clinicians will
receive between 3 and 10 points per reported measure. We note that this
scoring method allows partial credit because the MIPS eligible
clinician can still achieve points even if the MIPS eligible clinician
does not submit all the required measures. For example, if the MIPS
eligible clinician has six applicable measures yet only submits two
measures, then we will score the two submitted measures. However, the
MIPS eligible clinician will receive a 0 for every required measure
that is not submitted.
Comment: A few commenters requested that CMS not use quality-
tiering in MIPS given that regardless of the investment in quality,
most MIPS eligible clinicians will receive an average score.
Response: We are not using the quality-tiering methodology in MIPS.
We are shifting to the decile scoring system, and, unlike quality
tiering, we expect performance to be along a continuum.
Comment: Other commenters were concerned about the scoring
criteria, which they believed would not offer guaranteed success just
for reporting. Commenters stated that benchmarks and performance
standards remain undefined and return on investment is uncertain and
requested that CMS revise the quality scoring so that half of the
quality score is granted to any practice that just attempts to report.
Response: We would like to note that MACRA requires us to measure
performance, not reporting. During this transition year, though, we
believe it is important for MIPS eligible clinicians to
[[Page 77284]]
learn to participate in MIPS, be rewarded for good performance, and be
protected from being unfairly subjected to negative payment
adjustments. Therefore, in addition to scoring measures on performance,
we will give at least 3 points for each quality measure that is
submitted under MIPS, as well as for the readmission measure (if the
readmission measure is applicable). With the lowered performance
threshold described in section II.E.7.c. of this final rule with
comment period, this will ensure that MIPS eligible clinicians that
submit quality data will receive at least a neutral payment adjustment
or a small positive payment adjustment.
Comment: A few commenters did not support the decile approach. One
commenter proposed that CMS model quality scoring on the advancing care
information performance category scoring with a target point total and
the ability to exceed that total, and another commenter recommended
using flat percentages. One commenter opposed using percentiles,
deciles or any other rank-based statistics for performance ranking used
for payment adjustments because it does not generate information on
statistically significant performance at either end of the performance
spectrum and hides real differences that could lead to effective
quality improvement. The commenter also believed the proposed approach
will always penalize a certain proportion of clinicians. This commenter
recommended a methodology which uses some basis of statistical
significance or classification based on the underlying spread of the
distribution.
Response: All scoring systems have limitations, but we believe the
proposed scoring system is appropriate for MIPS. For measures for which
there is baseline data, our scoring system bases the benchmarks on this
data. This structure aligns with the HVBP and creates benchmarks that
are achievable. In addition, we were striving for simplicity, and we
believe that comparison to these benchmarks is well aligned. This
approach brings attention to measure performance and focuses on quality
improvement. We did not propose the flat percentage option as not all
measures are structured as a percentage. Finally, we elected not to
base the benchmark distribution on statistical significance because
those methods can be more difficult to explain, monitor and track. We
note also that relative performance is embedded in the MIPS payment
adjustment, which is applied to the final score on a linear scale. We
are finalizing at Sec. 414.1380(b)(1)(ix) to score performance using a
percentile distribution, separated by decile categories.
Comment: One commenter encouraged CMS to incorporate health equity
into a clinician's quality achievement score in future years.
Response: We will consider this feedback in future rulemaking.
Comment: One commenter requested clarification on how the CAHPS for
MIPS survey would be scored. The commenter asked if CMS intended to
create a single CAHPS for MIPS overall mean score roll-up or if
CMS would score each summary survey measure (SSM) individually to
create a CAHPS for MIPS average score.
Response: Each SSM will have an individual benchmark. We will score
each SSM individually and compare it against the benchmark to establish
the number of points. The CAHPS score will be the average number of
points across SSMs.
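The per-SSM scoring just described reduces to a simple average of the individually benchmarked SSM points. A minimal sketch, with hypothetical SSM names and point values:

```python
# Minimal sketch of CAHPS for MIPS scoring as described above: each
# summary survey measure (SSM) is scored against its own benchmark, and
# the CAHPS score is the average of the per-SSM points. The SSM names
# and point values here are hypothetical.
ssm_points = {
    "getting_timely_care": 7.4,
    "provider_communication": 9.1,
    "care_coordination": 6.8,
}
cahps_score = sum(ssm_points.values()) / len(ssm_points)
print(round(cahps_score, 1))  # 7.8
```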
Comment: Many commenters supported retaining topped out measures
and allowing topped out measures to be awarded the maximum number of
points. Commenters emphasized that topped out measures allow more
specialties to report and that the proposed lower point assignment to
topped out measures put clinicians that have limited ability to report
and track performance over time at a distinct disadvantage. For this
reason, commenters recommended awarding equal points for topped out and
non-topped out measures by maintaining the 10-point maximum value, at
least in the transition year. Commenters also cited a lack of
transparency in how topped out measures are identified, the existing
complexity in the quality scoring approach, the fact that measures that
are recognized as topped out nationally might not be topped out
regionally or locally, and a belief that topped out measures are only
reported by a small percentage of eligible physicians for any
particular measure. Commenters recommended not removing topped out
measures for at least 3 years since it takes that timeframe for new
measures to be developed to replace topped out measures and because
some topped out measures are critical to clinical care; however, other
commenters recommended removing topped out measures since such measures
will not appropriately reward high performance. Another commenter
requested a year's notice prior to removal.
Response: We agree that MIPS eligible clinicians should understand
which measures are topped out. Therefore, we are not going to modify
scoring for topped out measures until the second year the measure has
been identified as topped out. The first year that any measure can be
identified as topped out is the transition year, that is, the CY 2017
performance period. Thus, we will not modify the benchmark methodology
for any topped out measures for the CY 2017 performance period. We will
modify the benchmark methodology for topped out measures beginning with
the CY 2018 performance period, provided that it is the second year the
measure has been identified as topped out. We seek comment on whether,
for the second year a measure is topped out, we should use a mid-cluster
scoring approach or a flat percentage approach, or instead remove topped
out measures at this time.
Comment: Some commenters recommended that if topped out measures
are to be scored differently, we should use the Shared Savings Program
approach, not the Hospital VBP approach. One commenter suggested that
CMS review these measures after the first performance period to re-
evaluate topped out designations. One commenter noted that the
methodology for distinguishing topped out measures is flawed since a
narrow performance gap only means that performance is high for the
cohort of reporting providers and does not reflect the performance of
the rest of the population to whom the measure may be applicable. This
commenter stated that many of the measures that CMS had deemed topped
out were not implemented in PQRS long enough for robust data to have
been collected to confirm that designation and thus requested that CMS
remove the topped out designation.
Response: As noted above, we are not creating a separate scoring
system for topped out measures until the second year that the measure
has been identified as topped out based on the baseline quality scores
(for example, 2015 performance for the 2017 performance year). Our
methodology for selecting topped out measures uses all information
available to us. Because we offer the flexibility for most MIPS
eligible clinicians to select the measures most relevant to their
practice, we generally cannot assess the performance of clinicians on
measures that the clinicians do not elect to submit. However, we can
assess the performance of clinicians for the readmission measure which
is not submitted but which is calculated from administrative claims
data. We note that we are not removing topped out measures and that the
designation can change if data collection practices and results change.
We recognize that the MIPS scoring algorithm may not work as
[[Page 77285]]
well for topped out measures; however, for the transition year, we have
put protections in place to ensure that MIPS eligible clinicians who
report at least one quality measure are protected from being unfairly
subjected to a negative adjustment. We also intend to reduce the number
of topped out measures in MIPS in future years.
Comment: Commenters requested more transparency in how topped out
measures were identified and stressed the importance of identifying
topped out measures and the benchmarks for each before finalizing a
separate scoring system for such measures. Some commenters recommended
listing them in the final rule with comment period, defining the
rationale for maintaining them, and that if advance notice is not
possible, topped out measure points should not be reduced. One
commenter recommended that we allow the public to provide feedback
before designating a measure as topped out to explain why it might
appear as such. Another commenter noted that insufficient data is
available to determine whether a measure is truly topped out or whether
only high performers might have chosen to report a given measure.
Response: We agree that MIPS eligible clinicians should understand
which measures are topped out. We will take these comments into
consideration for future rulemaking. As discussed above, we are not
going to modify scoring for topped out measures until the second year
the measure has been identified as topped out.
We plan to identify topped out measures for benchmarks based on the
baseline period when we post the detailed measures specifications and
the measure benchmarks prior to the start of the performance period.
This will count as the first year a measure is identified as topped
out. The second year the same measure is topped out, we will apply a
topped out measure scoring standard beginning in performance periods
occurring in 2018. We note, as reflected above, that we are seeking
comment on the topped out measure scoring standard. We also plan to
identify topped out measures for benchmarks based on the performance
period.
Comment: Most commenters recommended not limiting the number of
topped out measures clinicians can submit, with one commenter asking
for clarification on whether reporting additional topped out measures
would allow a clinician to reach the maximum quality performance
category score. Another commenter supported limiting MIPS eligible
clinicians to reporting no more than two topped out measures to avoid
potential ``gaming''.
Response: For the transition year of MIPS, we are not going to
limit the number of topped out measures a clinician can submit. Thus,
reporting topped out measures could potentially allow a clinician to
reach the maximum quality performance category score since the MIPS
eligible clinician could receive 10 points for each topped out measure
submitted. We will continue to monitor and evaluate the impact of
topped out measures and should we deem it necessary, we would propose a
limitation of how many topped out measures could be reported through
future rulemaking.
Comment: One commenter recommended that CMS reweight topped out
measures so as not to impose an unavoidable penalty on specialists.
Another commenter suggested CMS re-evaluate and consider expanding its
criteria for topped out measures to ensure clinicians' relative quality
performance is fairly and accurately tied to payment, while still
ensuring that specialists have a sufficient number of measures to
select from under MIPS.
Response: We share the concerns that topped out measures may
disproportionately affect different specialties. We plan to publicly
post which measures are topped out so that commenters will be able to
plan accordingly. In addition, for the transition year of MIPS, we are
not modifying the scoring for topped out measures. Instead, scoring for
topped out measures will be the same as scoring for all other measures.
We will continue to monitor and evaluate the impact of topped out
measures by various MIPS eligible clinician practice characteristics.
We will propose any additional policy changes through future
rulemaking. Further, we encourage stakeholders to create new measures
that can be used in the MIPS program to replace any topped out
measures.
Comment: One commenter recommended removing topped out measures
from the CMS Web Interface measures.
Response: We are not proposing to remove topped out measures for
MIPS in the transition year, and we do not believe it would be
appropriate to remove topped out measures from the CMS Web Interface.
The CMS Web Interface measures are used in MIPS and in APMs such as the
Shared Savings Program. We have aligned policies where possible,
including using the Shared Savings Program benchmarks for the CMS Web
Interface measures. We believe any modifications to the CMS Web
Interface measures should be coordinated with the Shared Savings
Program and go through rulemaking.
Comment: One commenter was concerned about our comment in the
proposed rule that approximately half of the MIPS quality measures are
topped out and that several have a median score of 100 percent.
Response: We share the commenter's concerns that so many measures
are topped out and show little variation in performance. It is unclear
whether this result is truly due to a lack of variation in performance
or whether clinicians are submitting only measures on which they
perform well. We believe that MIPS eligible clinicians generally should
have the flexibility to select measures most relevant to their
practice, but one trade-off is not all MIPS eligible clinicians are
reporting the same measure. Because removing such a large volume of
measures would make it difficult for some specialties to have enough
applicable measures to submit, we are not removing these measures from
MIPS. As discussed above, we will identify these measures for year 1,
but we will not modify the scoring of topped out measures until the
second year they have been identified.
Comment: One commenter recommended that CMS identify topped out
measures as measures with a median performance rate over 95 percent
because the definition is easier to understand. Another commenter
requested further clarification on the definition of topped out
measures.
Response: We agree that, for process measures that are scored
between 0 and 100 percent, using a median greater than 95 percent is a
simple way to identify topped out measures. For process measures, we
are modifying our proposal to identify topped out measures as those
with a median performance rate of 95 percent or higher. For other
measures, we are finalizing our proposal to identify topped out
measures by using a definition similar to the definition used in the
Hospital VBP Program: Truncated Coefficient of Variation is less than
0.10 and the 75th and 90th percentiles are within 2 standard errors.
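The two finalized tests can be sketched in code. This is an illustrative reading of the definitions above, not an official implementation: the percentile method and the standard-error input are assumptions rather than specifications from the rule.

```python
import statistics

def truncated_cv(scores):
    # Truncated Coefficient of Variation: drop the 5 percent of
    # clinicians with the highest scores and the 5 percent with the
    # lowest before computing the coefficient of variation.
    s = sorted(scores)
    k = int(len(s) * 0.05)
    trimmed = s[k:len(s) - k] if k else s
    mean = statistics.mean(trimmed)
    return statistics.stdev(trimmed) / mean if mean else float("inf")

def is_topped_out(scores, is_process_measure, standard_error):
    s = sorted(scores)
    if is_process_measure:
        # Process measures: median performance rate of 95 percent or
        # higher.
        return statistics.median(s) >= 95.0
    # Other measures (Hospital VBP-style test): truncated CV below 0.10
    # and the 75th and 90th percentiles within 2 standard errors.
    p75 = s[int(0.75 * (len(s) - 1))]
    p90 = s[int(0.90 * (len(s) - 1))]
    return truncated_cv(s) < 0.10 and (p90 - p75) <= 2 * standard_error
```

Under this sketch, a process measure is flagged as soon as its median clinician performance rate reaches 95 percent, while a non-process measure must satisfy both the low truncated CV and the tight upper-percentile spread.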
Comment: One commenter recommended that CMS use historical data to
analyze whether allowing clinicians to choose an unrestricted
combination of six quality measures out of hundreds of measures would
lead to a topped out effect among final scores, and to devise an
alternative MIPS measure selection methodology should it find that
average final scores are universally inflated. The commenter also
recommended that CMS remove topped out measures from the list of
quality
[[Page 77286]]
measures that MIPS eligible clinicians have to choose from, as measures
that generate universally high performance scores fail to appropriately
reward performance with higher payment.
Response: We plan to continue evaluating the impact of topped out
measures in the MIPS program. Because removing such a large volume of
measures would make it difficult for some specialties to have enough
applicable measures to report, we are not removing these measures from
MIPS in year 1. As discussed above, we will identify these measures for
year 1, but we will not modify the scoring of topped out measures until
the second year they have been identified.
After consideration of the comments, we are not finalizing all of
our policies as proposed.
We are establishing that the performance standard with respect to
the quality performance category consists of measure-specific
benchmarks.
Specifically, we are finalizing at Sec. 414.1380(b)(1) that, for the
2017 performance period, MIPS eligible clinicians receive three to ten
achievement points for each scored quality measure in the quality
performance category based on the MIPS eligible clinician's performance
compared to measure benchmarks. A MIPS quality measure must have a
measure benchmark to be scored based on performance. MIPS quality
measures that do not have a benchmark will not be scored based on
performance. Instead, these measures will receive 3 points for the 2017
performance period.
We are finalizing at Sec. 414.1380(b)(1)(ix), that measures
submitted by MIPS eligible clinicians are scored using a percentile
distribution, separated by decile categories. As discussed below, for
MIPS payment year 2019, topped out quality measures are not scored
differently than quality measures that are not considered topped out.
At Sec. 414.1380(b)(1)(x), we finalize that for each set of
benchmarks, CMS calculates the decile breaks for measure performance
and assigns points based on which benchmark decile range the MIPS
eligible clinician's measure rate is between. At Sec.
414.1380(b)(1)(xi) we assign partial points based on the percentile
distribution. In Sec. 414.1380(b)(1)(xii) MIPS eligible clinicians are
required to submit measures consistent with Sec. 414.1335.
Based on public comments, we are finalizing a modification to our
proposal for the benchmark methodology for topped out measures.
Specifically, we will not modify the benchmark methodology for topped
out measures for the first year that the measure has been identified as
topped out. Rather, for the first year the measure has been identified
as topped out we will score topped out measures in the same manner as
other measures until the second year the measure has been identified as
topped out. The first year that any measure can be identified as topped
out is the transition year, that is, the CY 2017 performance period.
Thus, we will not modify the benchmark methodology for any topped out
measures for the CY 2017 performance period. We will modify the
benchmark methodology for topped out measures beginning with the CY
2018 performance period, provided that it is the second year the
measure has been identified as topped out. We seek comment on how
topped out measures would be scored provided that it is the second year
the measure has been identified as topped out. One option would be to
score the measures using a mid-cluster approach. Under this approach,
beginning with the CY 2018 performance period, we would limit the
maximum number of points a topped out measure can achieve based on how
clustered the scores are. We would identify clusters within topped out
measures and assign all MIPS eligible clinicians within the cluster the
same value, which will be the number of points available at the
midpoint of the cluster. That is, we would take the midpoint of the
highest and lowest scores that would pertain if the measure were not
topped out and the values were not clustered. We would only apply this
methodology for measures with benchmarks based on the baseline period.
When we develop the benchmarks, we would identify the clusters and
state the points that would be assigned when the measure performance
rate is in a cluster. We would notify MIPS eligible clinicians when
those benchmarks are published with regard to which measures are topped
out. Another approach would be to remove topped out measures in the CY
2018 performance period, provided that it is the second year the
measure has been identified as topped out. In this instance, we would
not score these measures. Finally, a third approach would be to apply a
flat percentage in building the benchmarks for topped out measures,
similar to the Shared Savings Program, where MIPS eligible clinicians
are scored on the performance rate rather than their place in the
performance rate distribution. We request comment on how to apply such
a methodology without providing an incentive to report topped out
measures. Under the Shared Savings Program, 42 CFR 425.502, there are
circumstances when benchmarks are set using flat percentages. For some
measures, benchmarks are set using flat percentages when the 60th
percentile was equal to or greater than 80.00 percent, effective
beginning with the 2014 reporting year (78 FR 74759-74763). For other
measures benchmarks are set using flat percentages when the 90th
percentile was equal to or greater than 95.00 percent, effective
beginning in 2015 (79 FR 67925). Flat percentages allow those with high
scores to earn maximum or near maximum quality points while allowing
room for improvement and rewarding that improvement in subsequent
years. Use of flat percentages also helps ensure those with high
performance on a measure are not penalized as low performers. We seek
comment on each of these three options. Finally, we also note that we
anticipate removing topped out measures over time, as we work to
develop new quality measures that will eventually replace these topped
out measures. We seek comment on at what point in time should measures
that are topped out be removed from the MIPS.
We are modifying our proposed approach to identify topped out
measures. We had proposed to identify all topped out measures by using
a definition similar to the definition used in the Hospital VBP
Program: Truncated Coefficient of Variation \25\ is less than 0.10 and
the 75th and 90th percentiles are within 2 standard errors; \26\ or
median value for a process measure that is 95 percent or greater (80 FR
49550).\27\ However, for process measures, we are defining at Sec.
414.1305 topped out process measures as those with a median performance
rate of 95 percent or higher. For other measures, we are defining at
Sec. 414.1305 topped out non-process measures using a definition
similar to the definition used in the Hospital VBP Program: Truncated
Coefficient of Variation is less than 0.10 and the 75th and 90th
percentiles are within 2 standard errors.
---------------------------------------------------------------------------
\25\ The 5 percent of MIPS eligible clinicians with the highest
scores, and the 5 percent with lowest scores are removed before
calculating the Coefficient of Variation.
\26\ This is a test of whether the range of scores in the upper
quartile is statistically meaningful.
\27\ This last criterion is in addition to the HVBP definition.
---------------------------------------------------------------------------
In addition, as discussed in section II.E.6.a.(2)(a) of this final
rule with comment period, we will add a global 3-point floor for all
submitted measures for the transition year by assigning the decile
breaks for measure performance between 3 and 10 points. We will revisit
[[Page 77287]]
this policy in future years. Adding this floor responds to public
comments for protections against being unfairly penalized for low
performance. Table 16 in section II.E.6.a.(2)(a) illustrates an example
of using decile points along with the addition of the 3-point floor to
assign achievement points for a sample quality measure. The methodology
in this example could apply to measures where the benchmark is based on
the baseline period or for new measures where the benchmark is based on
the performance period, assuming the measures meet the case minimum
requirements and have a benchmark. We will continue to apply the new
measure 3-point floor for measures without baseline period benchmarks
for performance years after the first transition year. As discussed in
section II.E.6.a.(2)(g)(ii) of this final rule with comment period, CMS
Web Interface measures below the 30th percentile will be assigned a
value of 3 points during the transition year to be consistent with
other submission mechanisms. For the transition year, the 3-point floor
will apply for all submitted measures regardless of whether they meet
the case minimum requirements or have a benchmark, with the exception
of measures submitted through the CMS Web Interface, which must still
meet the case minimum requirements and have a benchmark in order to be
scored. All submitted measures, regardless of submission mechanism,
must meet the case minimum requirements, data completeness
requirements, and have a benchmark in order to be awarded more than 3
points. We will revisit this policy in future years.
We provide some examples below of the total possible points that
MIPS eligible clinicians could receive under the quality performance
category under our revised methodology. As described in section
II.E.5.b. of this rule, MIPS eligible clinicians are required to submit
six measures or measures from a specialty measure set, and we would
also score MIPS eligible clinicians on the all-cause hospital
readmission measure for groups of 16 or more with sufficient case
volume (200 cases). The total possible points for the quality
performance category would be 70 points for groups of 16 or more
clinicians (6 submitted measures x 10 points + 1 all-cause hospital
readmission measure x 10 points = 70). Further, the total possible
points for small practices of 15 or fewer clinicians and solo
practitioners and MIPS individual reporters (or for groups with fewer
than 200 cases for the readmission measure) would be 60 points (6
submitted measures x 10 points = 60) because the all-cause hospital
readmissions measure would not be applicable.
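The arithmetic in these examples can be sketched as follows; the function and parameter names are illustrative, not terminology from the rule.

```python
def total_possible_quality_points(n_submitted_measures, scored_on_readmission):
    # Each submitted measure is worth up to 10 points. Groups of 16 or
    # more clinicians with sufficient case volume (200 cases) are also
    # scored on the all-cause hospital readmission measure.
    points = n_submitted_measures * 10
    if scored_on_readmission:
        points += 10
    return points

# Group of 16 or more with sufficient readmission case volume:
# total_possible_quality_points(6, True) -> 70
# Small practice, solo practitioner, or individual reporter:
# total_possible_quality_points(6, False) -> 60
```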
However, for groups reporting via CMS Web Interface and that have
sufficient case volume for the readmission measure, the total possible
points for the quality performance category would vary from 120 to 150
points as discussed in Table 24 in section II.E.6.a.(2)(g)(ii) of this
rule. If all measures are reported, then the total possible points is
120 points: (11 measures x 10 points) + (1 all-cause hospital
readmission measures x 10 points) = 120; for those groups with
sufficient case volume (200 cases) to be measured on readmissions. We
discuss in section II.E.6.a.(2)(g)(ii) why the total possible points
vary based on whether measures without a benchmark are reported. For
other CMS Web Interface groups without sufficient volume for the
readmissions measure, the readmission measure will not be scored, and
the total possible points for the quality performance category would
vary from 110 to 140 points, instead of 120 to 150, as discussed in section
II.E.6.a.(2)(g)(ii).
(c) Case Minimum Requirements and Measure Reliability and Validity
We seek to ensure that MIPS eligible clinicians are measured
reliably; therefore, we proposed at Sec. 414.1380(b)(1)(iv) to use for
the quality performance category measures the case minimum requirements
for the quality measures used in the 2018 VM (see Sec. 414.1265): 20
cases for all quality measures, with the exception of the all-cause
hospital readmissions measure, which has a minimum of 200 cases. We
referred readers to Table 46 of the CY 2016 PFS final rule (80 FR
71282), which summarized our analysis of the reliability of certain
claims-based measures used for the 2016 VM payment adjustment. MIPS
eligible clinicians that report measures with fewer than 20 cases (and
the measure meets the data completeness criteria) would receive
recognition for submitting the measure, but the measure would not be
included for MIPS quality performance category scoring. Since the all-
cause hospital readmissions measure does not meet the threshold for
what we consider to be moderate reliability for solo practitioners and
groups of fewer than ten MIPS eligible clinicians for purposes of the VM
(see Table 46 of the CY 2016 PFS final rule, referenced above), for
consistency, we proposed to not include the all-cause hospital
readmissions measure in the calculation of the quality performance
category for MIPS eligible clinicians who individually report, as well
as solo practitioners or groups of two to nine MIPS eligible
clinicians.
We also proposed that if we identify issues or circumstances that
would impact the reliability or validity of a measure score, we would
also exclude those measures from scoring. For example, if we discover
that there was an unforeseen data collection issue that would affect
the integrity of the measure information, we would not include that
measure in the quality performance category score. If a measure is
excluded, we would recognize that the measure had been submitted and
would not disadvantage the MIPS eligible clinicians by assigning them
zero points for a non-reported measure.
The following is a summary of the comments we received regarding
our proposal to score measures with minimum case volume and validity.
Comment: Several commenters were generally supportive of the 20
case minimum requirement.
Response: We appreciate the support from these commenters and are
finalizing our proposed approach of the 20 case minimum requirement for
all measures except the all-cause hospital readmission measure. We are
keeping the 200 case minimum for the all-cause readmission measure;
however, as we are defining small groups as those with 15 or fewer
clinicians, we are revising our proposal, under which the readmission
measure would not have applied only to solo practices and groups of 2-9
clinicians. Rather, for consistency, we will not apply the readmission
measure to solo practices, small groups (groups with 15 or fewer
clinicians), or MIPS individual reporters.
Comment: One commenter noted that clinicians attempting to
participate, even if they are unable to meet the minimum case
requirements, should still be acknowledged for making the attempt,
especially if they are showing year-over-year improvement.
Response: We agree that MIPS eligible clinicians should receive
acknowledgement for participating; however, we also have to balance
this with the ability to accurately measure performance. For the
transition year, we are modifying our proposed approach on how we will
score submitted measures that are unreliable because, for example, they
are below the case minimum requirements. These measures will not be
scored based on performance against a benchmark, but will receive an
automatic score of three points. We believe this policy will simplify
quality scoring in that it ensures that every clinician that submits
quality data will receive a quality score.
[[Page 77288]]
This is particularly important in the transition year because with a
minimum 90-day performance period, we anticipate more MIPS eligible
clinicians will submit measures below the case minimum requirements. We
selected three points because we did not want to provide more credit
for reporting a measure that cannot be reliably scored against a
benchmark than for measures for which we can measure performance
against a benchmark. In Table 17, we summarize two classes of measures:
``class 1'' are those measures for which performance can be reliably
scored against a benchmark, and ``class 2'' are measures for which
performance cannot be reliably scored against a benchmark.
Additionally, we seek comment on whether we should remove non-outcomes
measures for which performance cannot reliably be scored against a
benchmark (for example, measures that do not have 20 reporters with 20
cases that meet the data completeness standard) for 3 years in a row.
We believe it would be appropriate to remove outcomes measures under a
separate timeline as we expect reporting of such measures to increase
more slowly; further, we want to encourage the availability of outcomes
measures.
Comment: One commenter wanted to know whether a MIPS eligible
clinician will receive credit for reporting a measure even if the MIPS
eligible clinician's measure data indicates that the measure activity
was never performed. Another commenter supported the proposal to allow
MIPS eligible clinicians to receive credit for any measures that they
report, regardless of whether the MIPS eligible clinician meets the
quality performance category submission criteria.
Response: As summarized in Table 17, for the transition year,
measures that are submitted with a 0 percent performance rate
(indicating that the measure activity was never performed) will receive
3 points. Measures that are below the case minimum requirement, lack a
benchmark (as discussed in section II.E.6.a.(2)(a)), or do not meet the
data completeness requirements will also receive 3 points. However, we
acknowledge that these policies do not reflect our goals for MIPS
eligible clinicians' performance under this program. Rather, we aim for
complete and accurate reporting that reflects meaningful efforts to
improve the quality of care patients receive; we do not believe that a
0 percent performance rate or reporting of measures that do not meet
data completeness requirements achieves that aim. As such, we intend to
revisit these policies and apply more rigorous standards moving
forward. We will revisit these policies in future years.
Comment: One commenter requested that CMS ensure that all claims
measures meet a reliability threshold of 0.80 at the individual
physician level.
Response: We believe that measures with a reliability of 0.4 with a
minimum attributed case size of 20 meet the standards for being
included as quality measures within the MIPS program. We aim to measure
quality performance for as many clinicians as possible, and limiting
measures to reliability of 0.7 or 0.8 would result in fewer individual
clinicians with quality performance category measures. In addition, a
0.4 reliability threshold ensures moderate reliability for most MIPS
eligible clinicians or group practices that are being measured on
quality.
Comment: One commenter also opposed limiting the number of measures
that MIPS eligible clinicians can submit that are not able to be scored
due to not meeting the required case minimum, since certain specialties
may not have sufficient measures to report due to the few that are
applicable and available to them.
Response: We will not be limiting the number of measures that MIPS
eligible clinicians can submit that are below the case minimum
requirement in the transition year. We may revisit this approach in
future years.
Comment: One commenter recommended that CMS finalize the proposal
whereby physicians are not penalized in scoring when they report
measures but do not have the required case minimum.
Response: We are modifying our proposed approach. Under our
proposed approach, measures that were below the case minimum
requirement, would have not been scored. Our revised approach is that,
for the transition year, measures that do not meet the case minimum
requirement, lack a benchmark or do not meet the data completeness
criteria will not be scored and instead, MIPS eligible clinicians will
receive 3 points for submitting the measure.
After consideration of the comments, we are finalizing case minimum
policies for measures at Sec. 414.1380(b)(1)(iv) and (v). For the
quality performance category measures, we will use the following case
minimum requirements: 20 cases for all quality measures, with the
exception of the all-cause hospital readmissions measure, which has a
minimum of 200 cases. We reiterate that we will only apply the all-
cause readmission measure to groups of 16 or more MIPS eligible
clinicians that meet the case minimum requirement.
Based on public comments, we are revising our proposed policy for
all measures, except CMS Web Interface measures and administrative
claims-based measures, that are submitted but for which performance
cannot be reliably measured because the measures do not meet the
required case minimum, do not have a benchmark, or do not meet the data
completeness requirement; such measures will receive a floor of 3
points. At Sec.
414.1380(b)(1)(vii), for the transition year, we finalize that if the
measure is submitted but is unable to be scored because it does not
meet the required case minimum, does not have a benchmark, or does not
meet the data completeness requirement, the measure will receive a
score of 3 points.
We are finalizing our proposed policy for CMS Web Interface
measures that are submitted but for which performance cannot be
reliably measured because the measures do not meet the required case
minimum or do not have a benchmark. At Sec. 414.1380(b)(1)(viii), we
are finalizing that the MIPS eligible clinician will receive
recognition for submitting such measures, but the measure will not be
included for MIPS quality performance category scoring. CMS Web
Interface measures that do not meet the data completeness requirement
will receive a score of 0. We are also finalizing our proposed policy
for administrative claims-based measures for which performance cannot
be reliably measured because the measures do not meet the required case
minimum or do not have a benchmark. For the transition year, this
policy would only apply to the readmission measure since the only
administrative claims-based quality measure is the readmission measure.
However, this policy will apply to additional administrative claims-
based measures that are added in future years. At Sec.
414.1380(b)(1)(viii), we are finalizing that such measures will not be
included in the MIPS eligible clinician's quality performance category
score. We note that the data completeness requirement does not apply to
administrative claims-based measures. Overall, at Sec. 414.1380, we
will provide points for all submitted
[[Page 77289]]
measures, but only a subset of measures receive points based on
performance against a benchmark. Table 17 summarizes our scoring rules
and identifies two classes of measures for scoring purposes.\28\
---------------------------------------------------------------------------
\28\ We classified the measures for simplicity in discussing
results. Name of classification subject to change.
Table 17--Quality Performance Category: Scoring Measures Based on
Performance for Performance Period 2017
------------------------------------------------------------------------
     Measure type              Description             Scoring rules
------------------------------------------------------------------------
Class 1--Measure can     Measures that were         Receive 3 to 10
 be scored based on       submitted or calculated    points based on
 performance.             and that met all of the    performance
                          following criteria:        compared to the
                          (1) The measure has a      benchmark.
                              benchmark; \29\
                          (2) Has at least 20
                              cases; and
                          (3) Meets the data
                              completeness standard
                              (generally 50
                              percent).
Class 2--Measure         Measures that were         Receive 3 points.
 cannot be scored         submitted but fail to      Note: This Class 2
 based on performance     meet at least one of       measure policy does
 and is instead           the Class 1 criteria;      not apply to CMS
 assigned a 3-point       the measures either:       Web Interface
 score.                   (1) Do not have a          measures and
                              benchmark;             administrative
                          (2) Do not have at         claims-based
                              least 20 cases; or     measures.
                          (3) Do not meet data
                              completeness
                              criteria.
------------------------------------------------------------------------
Generally, if we identify issues or circumstances that impact the
reliability or validity of a class 1 measure score, we will recognize
that the measure was submitted, but exclude that measure from scoring.
Instead, MIPS eligible clinicians will receive a flat 3 points for
submitting the measure. However, if we identify issues or circumstances
that impact the reliability or validity of a class 1 measure that is a
CMS Web Interface or administrative claims-based measure, we will
exclude the measure from scoring. For Web Interface measures, as
discussed in section II.E.6.a.(2)(g)(ii) of this final rule with
comment period, we will recognize that the measure had been submitted;
for both Web Interface and administrative claims-based measures, we
will not score these measures. For the transition year, we note that the
readmission measure is the only administrative claims-based quality
measure. However, this policy will apply to additional administrative
claims-based measures that are added in future years.
---------------------------------------------------------------------------
\29\ Benchmarks required at least 20 reporters with at least 20
cases that meet the data completeness standard and have performance
greater than 0 percent.
---------------------------------------------------------------------------
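The Table 17 scoring classes reduce to a simple rule, sketched below; the function and parameter names are illustrative assumptions, not terms from the rule.

```python
def classify_measure(has_benchmark, case_count, meets_data_completeness):
    # Class 1: scored 3 to 10 points against the benchmark.
    # Class 2: automatic 3 points (this fallback does not apply to CMS
    # Web Interface or administrative claims-based measures).
    if has_benchmark and case_count >= 20 and meets_data_completeness:
        return "class 1"
    return "class 2"
```

Failing any one of the three criteria (no benchmark, fewer than 20 cases, or incomplete data) is enough to place a submitted measure in class 2.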
We provide below examples of our new scoring approach. For
simplicity, the examples not only explain how to calculate the
quality performance category score, but also how the quality
performance category score contributes to the final score as described
in section II.E.6.b of this final rule with comment period, assuming a
quality performance category weight of 60 percent. We use the term
weighted score to represent a performance category score that is
adjusted for the performance category weight.
If the MIPS eligible clinician, as a solo practitioner, scored 10
out of 10 on each of five measures submitted, one of which was an
outcome measure, and had one measure that was below the required case
minimum, the MIPS eligible clinician would receive the following
weighted score for the quality performance category: (5 measures x 10
points) + (1 measure x 3 points) or 53 out of 60 possible points x 60
(weight of quality performance category) = 53 points toward the final
score. Similarly, if the MIPS eligible clinician, as a solo
practitioner, scored 10 out of 10 on each of five measures submitted,
one of which was an outcome measure, but failed to submit a sixth
measure even though there were applicable measures that could have been
submitted, the MIPS eligible clinician would receive the following
weighted score in the quality performance category: (5 measures x 10
points) + (1 measure x 0 points) or 50 out of 60 possible points x 60
(weight of quality performance category) = 50 points toward the final
score.
We also provide examples of instances where MIPS eligible
clinicians either do not have six applicable measures or the applicable
specialty measure set has fewer than six measures.
For example, if a specialty set only has 3 measures or if a MIPS
eligible clinician only has 3 applicable measures, then, in both
instances, the total possible points for the MIPS eligible clinician is
30 points (3 measures x 10 points). If the MIPS eligible clinician
scored 8 points on each of the 3 applicable measures submitted, one of
which was an outcome measure, then the MIPS eligible clinician would
receive the following weighted score in the quality performance
category: (3 measures x 8 points) or 24 out of 30 possible points x 60
(weight of quality performance category) = 48 points toward the final
score.
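The weighted-score arithmetic in the examples above can be expressed as a single calculation. This is a sketch of the worked examples only; the names are illustrative.

```python
def weighted_quality_score(measure_points, total_possible_points,
                           category_weight=60):
    # Points earned across scored measures, scaled to the quality
    # performance category weight (60 percent in these examples).
    return sum(measure_points) * category_weight / total_possible_points

# Five measures at 10 points plus one below the case minimum (3-point
# floor), out of 60 possible points:
# weighted_quality_score([10, 10, 10, 10, 10, 3], 60) -> 53.0
# Three applicable measures scored at 8 points each, out of 30:
# weighted_quality_score([8, 8, 8], 30) -> 48.0
```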
(d) Scoring for MIPS Eligible Clinicians That Do Not Meet Quality
Performance Category Criteria
Section II.E.5.b. of the proposed rule outlined our proposed
quality performance category criteria for the different reporting
mechanisms. The criteria vary by reporting mechanism, but generally we
proposed to include a minimum of six measures with at least one cross-
cutting measure (for patient-facing MIPS eligible clinicians) (Table C
of the proposed rule at 81 FR 28447) and an outcome measure if
available. If an outcome measure is not available, then we proposed
that the eligible clinician would report one other high priority
measure (appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) in lieu of an outcome
measure. We proposed that MIPS eligible clinicians and groups would
have to select their measures from either the list of all MIPS Measures
in Table A of the Appendix in the proposed rule (81 FR 28399) or a set
of specialty specific measures in Table E of the Appendix in the
proposed rule (81 FR 28460). As discussed in section II.E.5.b.(3) of
this final rule with comment period, we are not finalizing the
requirement for a cross-cutting measure. As discussed in II.E.5.b.(6)
of this final rule with comment period, we are also not including two
of the three population measures in the scoring.
We noted that there are some special scenarios for those MIPS
eligible clinicians who select their measures from the Specialty Sets
(Table E of the
[[Page 77290]]
Appendix in the proposed rule at 81 FR 28460) as discussed in section
II.E.5.b. of the proposed rule (81 FR 28186).
For groups using the CMS Web Interface and MIPS APMs, we proposed
to have different quality performance category criteria described in
sections II.E.5.b. and II.E.5.h. of the proposed rule (81 FR 28187 and
81 FR 28234). Additionally, as described in section II.E.5.b of the
proposed rule, we also proposed to score MIPS eligible clinicians on up
to three population-based measures.
Previously in PQRS, EPs had to meet all the criteria or be subject
to a negative payment adjustment. However, we proposed that MIPS
eligible clinicians receive credit for measures that they report,
regardless of whether the MIPS eligible clinician meets the
quality performance category submission criteria. Section
1848(q)(5)(B)(i) of the Act provides that under the MIPS scoring
methodology, MIPS eligible clinicians who fail to report on an
applicable measure or activity that is required to be reported shall be
treated as receiving the lowest possible score for the measure or
activity; therefore, for any MIPS eligible clinician who does not
report a measure required to satisfy the quality performance category
submission criteria, we proposed that the MIPS eligible clinician would
receive zero points for that measure. For example, a MIPS eligible
clinician who is able to report on six measures, yet reports on four
measures, would receive two ``zero'' scores for the missing measures.
However, we proposed that MIPS eligible clinicians who report a measure
that does not meet the required case minimum would not be scored on the
measure but would also not receive a ``zero'' score.
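The proposed zero-score rule can be sketched as follows. This is our own hedged illustration; the data structure and function name are assumptions, not the rule's terminology:

```python
def score_submitted_measures(submitted, required_count=6):
    """Sketch of the proposed rule: each required measure that is not
    reported receives an explicit zero, while a reported measure that
    falls below the required case minimum is excluded from scoring
    rather than assigned a zero."""
    scores = {}
    for measure_id, m in submitted.items():
        if m["meets_case_minimum"]:
            scores[measure_id] = m["points"]
        # below the case minimum: not scored, but also not zeroed
    for i in range(max(0, required_count - len(submitted))):
        scores[f"unreported_{i + 1}"] = 0  # missing required measure
    return scores
```

For example, a clinician who reports only four of six required measures accrues two explicit zeros, while a fifth measure reported below the case minimum is simply dropped from the score.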
We also noted that where MIPS eligible clinicians are able to submit
measures that can be scored, we want to discourage them from
continuing to submit, year after year, the same measures that cannot
be scored because they do not meet the required case minimum. Rather, to the fullest extent
possible, MIPS eligible clinicians should select measures that would
meet the required case minimum. We sought comment on any safeguards we
should implement in future years to minimize any gaming attempts. For
example, if the measures that a MIPS eligible clinician submits for a
performance period are not able to be scored due to not meeting the
required case minimum, we sought comment on whether we should require
these MIPS eligible clinicians to submit different measures with
sufficient cases for the next performance period (to the extent other
measures are applicable and available to them).
We proposed that MIPS eligible clinicians who report a measure
where there is no benchmark due to fewer than 20 MIPS eligible
clinicians reporting on the measure would not be scored on the measure
but would also not receive a ``zero'' score. Instead, these MIPS
eligible clinicians would be scored according to the following example:
A MIPS eligible clinician who submits six measures through a group of
10 or more clinicians, with one measure lacking a benchmark, would be
scored on the five remaining measures and the three population-based
measures based on administrative claims data.
We stated our intent to develop a validation process to review and
validate a MIPS eligible clinician's inability to report on the quality
performance requirements as proposed in section II.E.5.b. of the
proposed rule. We anticipate that this process would function
similarly to the Measure-Applicability Validation (MAV) process that
occurred under PQRS, with a few exceptions. First, the MAV process
under PQRS was a secondary process applied after an EP was determined
not to be a satisfactory reporter. Under MIPS, we intend to build the
process into our overall scoring approach, rather than maintaining a
separate process, to reduce confusion and burden on MIPS eligible
clinicians. Second, as the requirements under PQRS are different from
those proposed under MIPS, the process must be updated to account for
different measures and different quality performance requirements.
More information on the MAV process under
PQRS can be found at http://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/PQRS/Downloads/2016_PQRS_MAV_ProcessforClaimsBasedReporting_030416.pdf. We requested
comments on these proposals.
The following is a summary of the comments we received regarding
our proposal to score MIPS eligible clinicians that do not meet quality
performance category criteria.
Comment: Commenters recommended that we clarify the proposed
process to identify whether groups have fewer than 6 applicable
measures to report and wanted real time notification of whether they
passed. One commenter requested clarification on how proposed specialty
sets will be scored, given that many have fewer than the required number
of measures and do not include a required outcome or high priority
measure. A few commenters recommended reinstating the MAV process. A
few commenters recommended that CMS should engage the public in
developing the MAV process and provide the public with a formal
opportunity to provide input into proposed clusters and the overall MAV
algorithm. One commenter recommended that CMS consider both the
availability of measures based on subspecialty or patient condition and
also submission mechanism. The commenter was concerned that due to the
requirement to use only one submission mechanism per performance
category, a MIPS eligible clinician or group may be prevented from
achieving all measure requirements. The commenter believed CMS should
not penalize a clinician for failing to report a measure because it is
unavailable via the submission mechanism selected. Another commenter
requested that CMS compare the scores of primary care and specialty
care clinicians and assess whether the difference is due to a lack of
available measures.
Response: The MIPS validation process will vary by submission
mechanism. For claims and registry submissions, we plan to use the
cluster algorithms from the current MAV process under PQRS to identify
which measures a MIPS eligible clinician is able to report. For QCDRs,
we do not intend to establish a validation process. We expect that
MIPS eligible clinicians who enroll in QCDRs will have sufficient
meaningful measures available to report. For the
EHR submissions, we know that MIPS eligible clinicians may not have six
measures relevant within their EHR. If there are not sufficient EHR
measures to meet the full specialty set requirements or meet the
requirement to submit 6 measures, the MIPS eligible clinician should
select a different submission mechanism in order to meet the quality
performance category requirements of submitting measures in a specialty
set or six applicable measures. MIPS eligible clinicians should work
with their EHR vendors to incorporate applicable measures as feasible.
As discussed in section II.E.6.a.(1) of this final rule with comment
period, if a MIPS eligible clinician submits via multiple mechanisms we
would calculate two quality performance category scores and take the
highest score. For the CMS Web Interface, MIPS eligible clinicians are
attributed beneficiaries from a defined population that is appropriate
for the measures, so there is no need for additional validation. Given
the number of choices for submitting quality data, we anticipate MIPS
eligible clinicians will be able to find a submission mechanism that
meets the MIPS
[[Page 77291]]
submission requirements. We strongly encourage MIPS eligible clinicians
to select the submission mechanism that has 6 measures that are
available and appropriate to their specialty and practice type.
Comment: Several commenters made recommendations on our request for
comments on preventing gaming. Some commenters recommended an
attestation or statement of explanation when a practice or provider
chooses to submit a quality measure that does not meet the required
case minimum. One commenter recommended that CMS require attestation
from physicians who claim they are unable to report on quality
performance requirements and that CMS provide very clear directions
about the requirements in order to prevent confusion and inadvertent
wrongdoing. Another commenter encouraged CMS to implement a strict
validation and review process and to establish safeguards, such as a
limit on the number of measures that can be reported below the case
minimum. One commenter requested clarification on whether CMS will
allow clinicians to remain within their applicable measure set in such
a scenario (that is, not force clinicians to report measures outside of
their applicable measure set just to meet case minimum thresholds) and
was concerned about the idea of prohibiting subsequent reporting on
measures that did not meet case minimums. One commenter objected to our
request for comments on how to prevent ``gaming,'' stating that for CMS to
give such time and consideration to potential gaming of the system is
insulting to America's physicians. The commenter believed that such
focus on gaming leads to unnecessarily complicated programs. The
commenter recommended that CMS acknowledge in the final rule with
comment period that the vast majority of Medicare physicians are not
intending to ``game'' the system or avoid meeting CMS program
requirements and are instead attempting to learn about a new payment
system that could go into effect in less than 6 months. The commenter
also recommended that the resources currently earmarked for the purpose
of identifying potential gaming should be directed towards helping MIPS
eligible clinicians, from both large and small practices, understand
the regulatory requirements, correctly report data, and identify areas
and methods in which they can improve their scores.
Response: For the transition year, we are encouraging participation
in MIPS and will not be finalizing any policies to prevent gaming. We
agree with the commenter in that we believe the vast majority of MIPS
eligible clinicians do not intend to game the system. Rather, we
believe that clinicians are interested in working with us to learn the
details of the new payment system established under the Quality Payment
Program and to provide high quality care to Medicare beneficiaries. We
must ensure, however, that payment under this new system is based on
valid and accurate measurement and scoring, and identify ways to
prevent any potential gaming that could occur in the program. We will
continue to monitor MIPS eligible clinician submissions and may propose
additional policies through future rulemaking as appropriate.
Comment: Commenters recommended that we hold EHR vendors
accountable for EHR certification and measure availability and take
this into account when scoring a MIPS eligible clinician on low case
volume.
Response: We do currently require that EHR vendors be certified to
a minimum of 9 eCQMs as is required for reporting under the current
PQRS and EHR Incentive Programs. In the 2015 EHR Incentive Programs
final rule, CMS required EPs, eligible hospitals, and CAHs to use the
most recent version of an eCQM for electronic reporting beginning in
2017 (80 FR 62893). We are maintaining this policy for the electronic
reporting bonus under MIPS and encourage MIPS eligible clinicians to
work with their EHR vendors to ensure they have the most recent version
of the eCQM. CMS will not accept an older version of an eCQM for a
submission for the MIPS program for the quality category or the end-to-
end electronic reporting bonus within that category. Additionally,
measures that are submitted below the required case minimum will
receive 3 points but will not be scored on performance for the 2017
performance period.
After consideration of the comments, we are finalizing at Sec.
414.1380(b)(1)(vi) that MIPS eligible clinicians who fail to report a
measure that is required to satisfy the quality performance category
submission criteria will receive zero points for that measure. Further,
we are finalizing implementation of a validation process for claims and
registry submissions to validate whether MIPS eligible clinicians have
six applicable and available measures, and whether an outcome measure
(or, if an outcome measure is not available, another high priority
measure) is available.
However, we are not finalizing our proposal that MIPS eligible
clinicians who report a measure that does not meet the required case
minimum, the data completeness criteria, or for which there is no
benchmark due to fewer than 20 MIPS eligible clinicians reporting the
measure, would not receive any points for submission and would not be
scored on performance against a benchmark. Rather, as discussed in
section II.E.6.a.(2)(c) of this final rule with comment period, for
``class 2'' measures, as defined in Table 17, that are submitted but
unable to be scored, we will add a 3-point floor for all submitted
measures for the transition year. That is, if a MIPS eligible clinician
submits a ``class 2'' measure, as defined in Table 17, we will assign 3
points to the MIPS eligible clinician for submitting that measure
regardless of whether the measure meets the data completeness
requirement or required case minimum requirement or whether the measure
has a benchmark for the transition year. For example, a MIPS eligible
clinician who is a solo practitioner could submit 6 measures as
follows: 2 measures (one of which is an outcome measure) with high
performance, scoring 10 out of 10 on each of these measures, 1 measure
that lacks minimum case size, 1 measure that lacks a benchmark, 1
measure that does not meet the data completeness requirement and 1
measure with low performance. In this case, the MIPS eligible clinician
would receive 32 out of 60 possible points in the quality performance
category (2 measures x 10 points plus 4 measures x 3 points). We will
revisit this policy in future years.
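The transition-year arithmetic in the example above can be sketched as follows. This is an illustration under our own naming; the ``class 2'' determination is simplified here to a single scorable flag:

```python
def transition_year_quality_points(measures, floor=3):
    """Transition-year sketch: measures scored against a benchmark keep
    their earned points (up to 10); submitted "class 2" measures (below
    the case minimum, lacking a benchmark, or failing data completeness)
    receive the 3-point floor instead."""
    return sum(m["points"] if m["scorable"] else floor for m in measures)

example = [
    {"scorable": True,  "points": 10},  # outcome measure, high performance
    {"scorable": True,  "points": 10},  # high performance
    {"scorable": False, "points": 0},   # lacks minimum case size
    {"scorable": False, "points": 0},   # lacks a benchmark
    {"scorable": False, "points": 0},   # fails data completeness
    {"scorable": True,  "points": 3},   # low performance
]
# 2 x 10 + 4 x 3 = 32 of 60 possible points
```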
(e) Incentives To Report High Priority Measures
Consistent with other CMS value-based payment programs, we proposed
that MIPS scoring policies would emphasize and focus on high priority
measures that impact beneficiaries. These high priority measures are
defined as outcome, appropriate use, patient safety, efficiency,
patient experience and care coordination measures; see Tables A through
D of the Appendix in the proposed rule (81 FR 28399-28460) for these
measures. We proposed these measures as high priority measures given
their critical importance to our goals of meaningful measurement and
our measure development plan. We note that many of these measures are
grounded in NQS domains. For patient safety, efficiency, patient
experience and care coordination measures, we refer to the measures
within the respective NQS domains and measure types. For outcomes
measures, we include both outcomes measures and intermediate outcomes
measures. For appropriate use measures, we have noted which measures
fall within this category in Tables A through D and provided
[[Page 77292]]
criteria for how we identified these measures in section II.E.5.b. of
the proposed rule. For non-MIPS measures reported through QCDRs, we
proposed to classify which measures are high priority during the
measure review process.
We proposed scoring adjustments to create incentives for MIPS
eligible clinicians to submit high priority measures and to allow these
measures to have more impact on the total quality performance category
score.
We proposed to create an incentive for MIPS eligible clinicians to
voluntarily report additional high priority measures. We proposed to
provide 2 bonus points for each outcome and patient experience measure
and 1 bonus point for other high priority measures reported in addition
to the one high priority measure (an outcome measure, but if one is not
available, then another high priority measure) that would already be
required under the proposed quality performance category criteria. For
example, if a MIPS eligible clinician submitted 2 outcome measures and
2 patient safety measures, the MIPS eligible clinician would receive 2
bonus points for the second outcome measure reported and 2 bonus
points (1 each) for the 2 patient safety measures. The MIPS eligible clinician
would not receive any bonus points for the first outcome measure
submitted since that is a required measure. We selected 2 bonus points
for outcome measures given the statutory requirements under section
1848(q)(2)(C)(i) of the Act to emphasize outcome measures. We selected
2 bonus points for patient experience measures given the importance of
patient experience measures to our measurement goals. We selected 1
bonus point for all other high priority measures given our measurement
goals around each of those areas of measurement. We believe the number
of bonus points provides extra credit for submitting the measure, yet
would not mask poor performance on the measure. For example, a MIPS
eligible clinician with poor performance receives only 3 points for
performance for a particular high priority measure. The bonus points
would increase the MIPS eligible clinician's points to 4 (or 5 if the
measure is an outcome measure or patient experience measure), but that
amount is far less than the 10 points a top performer would receive. We
noted that population-based measures would not receive bonus points.
We noted that a MIPS eligible clinician who submits a high priority
measure but had a performance rate of 0 percent would not receive any
bonus points. MIPS eligible clinicians would only receive bonus points
if the performance rate is greater than zero. Bonus points are also
available for measures that are not scored (not included in the top 6
measures for the quality performance category score) as long as the
measure has the required case minimum and data completeness. We believe
these qualities would allow us to include the measure in future
benchmark development.
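A hedged sketch of these bonus rules follows. It is our own simplification, not the rule's specification: the first high priority measure in the list stands in for the one required measure, and the field names are illustrative:

```python
HIGH_PRIORITY = {"outcome", "patient_experience", "appropriate_use",
                 "patient_safety", "efficiency", "care_coordination"}
TWO_POINT_TYPES = {"outcome", "patient_experience"}

def high_priority_bonus(measures):
    """2 bonus points per additional outcome or patient experience
    measure and 1 per other high priority measure, after the one
    required high priority measure; no bonus for a zero performance
    rate or a measure failing case minimum / data completeness."""
    bonus = 0
    required_consumed = False
    for m in measures:
        if m["type"] not in HIGH_PRIORITY:
            continue
        if not required_consumed:
            required_consumed = True  # the required measure earns no bonus
            continue
        if m["performance_rate"] <= 0 or not m["meets_requirements"]:
            continue  # zero performance or invalid submission: no bonus
        bonus += 2 if m["type"] in TWO_POINT_TYPES else 1
    return bonus
```

Run against the example in the text (2 outcome measures plus 2 patient safety measures), this yields 4 bonus points: 2 for the second outcome measure and 1 for each patient safety measure.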
Groups submitting data through the CMS Web Interface, including
MIPS APMs that report through the CMS Web Interface, are required to
submit a set of predetermined measures and are unable to submit
additional measures (other than the CAHPS for MIPS survey). For that
submission mechanism, we proposed to apply bonus points based on the
finalized set of measures. We would assign two bonus points for each
outcome measure (after the first required outcome measure) and for each
patient experience measure. We would also have one additional bonus
point for each other high priority measure (patient safety, efficiency,
appropriate use, care coordination). We believe MIPS eligible
clinicians or groups should have the ability to receive bonus points
for reporting high priority measures through all submission mechanisms,
including the CMS Web Interface. In this final rule with comment
period, we will publish how many bonus points the CMS Web Interface
measure set would have available based on the final list of measures
(See Table 21).
We proposed to cap the bonus points for the high priority measures
(outcome, appropriate use, patient safety, efficiency, patient
experience, and care coordination measures) at 5 percent of the
denominator of the quality performance category score. Tables 19 and 20
of the proposed rule (81 FR 28257-28258) illustrated examples of how to
calculate the bonus cap. We also proposed an alternative approach of
capping bonus points for high priority measures at 10 percent of the
denominator of the quality performance category score. Our rationale
for the 5 percent cap was that we do not want to mask poor performance
by allowing a MIPS eligible clinician to perform poorly on a measure
but still obtain a high quality performance category score by
submitting numerous high priority measures in order to obtain bonus
points; however, we were also concerned that 5 percent may not be
enough incentive to encourage reporting. We requested comment on the
appropriate threshold for this bonus cap.
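Applying the cap is straightforward arithmetic; the following sketch (our own, with illustrative names) uses the proposed 5 percent as the default, with the 10 percent alternative available as a parameter:

```python
def capped_bonus(bonus_points, denominator, cap_fraction=0.05):
    """Cap high priority bonus points at a fraction of the quality
    performance category denominator (total possible points). The
    proposal set the cap at 5 percent, with 10 percent as an
    alternative option."""
    return min(bonus_points, cap_fraction * denominator)

# 5 bonus points against a 60-point denominator: the 5 percent cap
# limits the credit to 3 points.
```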
The following is a summary of the comments we received regarding
our proposal to provide bonus points for high priority quality
measures.
Comment: Several commenters supported our proposal to award two
bonus points for reporting additional outcome or patient experience
measures and one bonus point for reporting any other high priority
measure, indicating that rewarding bonus points would provide an
additional incentive to report on measures which were of higher value
to patients.
Response: We appreciate the support of the commenters for our
proposals. We are finalizing the proposal to assign two bonus points
for reporting additional outcome or patient experience measures and one
bonus point for reporting any other high priority measure.
Comment: Some commenters recommended that outcome, patient
experience, and other high priority measures not be required for
reporting but should be awarded bonus points if they are reported,
including the first high priority measure reported.
Response: Our long term goal for the Quality Payment Program is to
move reporting towards high priority measures. We believe that our
proposal to require an outcome measure or another high priority measure
if an outcome measure is not available presents a balanced approach
that will encourage more reporting of these measures. We are concerned
that the use of these measures would be much more limited and selective
if reporting of one of these measures were not required.
Comment: A number of commenters expressed concern with the proposal
to award bonus points for the reporting of additional high priority
measures because many specialties do not have sufficient outcome,
patient experience or other high priority measures to receive bonus
points. Some commenters expressed concern about the future development
of outcome measures due to lack of available clinical evidence and poor
risk adjustment.
Response: By awarding bonus points for the reporting of additional
high priority measures, we are encouraging a movement towards stronger
development of measures that are aligned with our measurement goals. We
encourage stakeholders who are concerned about a lack of high priority
measures to consider development of these measures and submit them for
future use within the program. In addition, our strategy for
identifying and developing meaningful outcome measures is described in
the MACRA quality measure development plan, authorized by section 102
of the MACRA (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf).
[[Page 77293]]
The plan describes how we intend to consider evidence-based research,
risk adjustment, and other factors to develop better outcome measures.
Comment: A commenter recommended that CMS identify a small number
of high priority measures including patient-reported outcome measures
that would be tested on a regional scale before being implemented
nationally. This commenter recommended that these proposed high
priority measures should be vetted with other stakeholders.
Response: We believe that our proposed measure set provides
flexibility for clinicians in determining which measures to report. All
measures go through a review process that includes public comment as
part of the rulemaking process, and most measures are reviewed by the
NQF-convened MAP as part of CMS' pre-rulemaking process.
Comment: A commenter recommended that CMS move toward establishing
core sets of high priority measures by specialty or subspecialty. This
would enable consumers and purchasers to make direct comparisons of
similar clinicians with assurance that they are all being assessed
against a consistent and standardized set of important quality
indicators.
Response: As part of this rule, we have finalized specialty measure
sets that may simplify the measure selection process. We continue to
encourage the development of outcome and other high priority measures
that may be reported and relevant to all specialties of medicine.
Comment: A commenter supported the concept of incentivizing
clinicians to submit high priority measures given that they can be more
challenging; however, this commenter sought clarification on which
measures submitted by QCDRs would be considered high priority. This
same commenter indicated that QCDRs should be allowed to determine the
most appropriate classification for each of its measures, including
which measures should be considered high priority, subject to the QCDR
measure approval process.
Response: We define high priority measures as those of the
following types: outcome, appropriate use, patient safety,
efficiency, patient experience and care coordination measures. For non-
MIPS measures reported through QCDRs, we proposed to classify which
measures are high priority during the measure review process (81 FR
28186). If the measure is endorsed by NQF as an outcome measure, we
will take that designation into consideration. If we decide to assign
these domains to QCDR measures, we will add the high priority
designation to QCDR measures accordingly. Although we may enlist the
assistance and consultation of the QCDR in assessing high priority
measures, we would still make the final high priority designation.
Comment: One commenter requested clarity on measures which are
identified as a high priority and noted that, based on past reporting
statistics, certain high-priority measures may be classified as topped
out. The commenter requested clarification on what this means for the
MIPS eligible clinician's score.
Response: Any high priority measure that is topped out will still
be eligible for bonus points. We think incentives should remain to
report high priority measures, even topped out measures, as additional
reporting makes for a more comprehensive benchmark and can help confirm
that the measure is truly topped out. Also, as discussed in section
II.E.6.a.(2)(c) of this final rule with comment period, we are not
implementing any special scoring for topped out measures in year 1 of
MIPS. Thus, the score for that measure will not be reduced by our
proposed mid-cluster approach for topped out measures in CY 2017. We
will not modify the benchmark methodology for any topped out measures
for the CY 2017 performance period. We will modify the benchmark
methodology for topped out measures beginning with the CY 2018
performance period, provided that it is the second year the measure has
been identified as topped out. We will propose options for scoring
topped out measures through future rulemaking.
Comment: One commenter supported our proposal to award 2 bonus
points for outcome measures but recommended that only 1 bonus point be
awarded for the reporting of patient experience measures.
Response: We believe that patient experience measures align with
our measurement goals and for that reason should be awarded the same
number of bonus points as outcome measures.
Comment: One commenter requested clarification as to whether a MIPS
eligible clinician can earn bonus points if the MIPS eligible clinician
does not report all 6 measures due to lack of available measures.
Response: The MIPS eligible clinician can receive bonus points on
all high priority measures submitted, after the first required high
priority measure submitted, assuming these measures meet the minimum
case size and data completeness requirements even if the MIPS eligible
clinician did not report all 6 required measures due to lack of
available measures.
Comment: One commenter recommended that CMS pursue additional
approaches to the quality performance category to advance health equity
and reward MIPS eligible clinicians who promote health equity
including: adding measures stratified by race and ethnicity or other
disparity variable, and developing and adding a stand-alone health
equity measure as a high priority measure for which clinicians can
receive a bonus point.
Response: Eliminating racial and ethnic disparities to achieve an
equitable health care system is one of the four foundational principles
listed in the CMS Quality Strategy. We refer readers to the MACRA
quality measure development plan, authorized by section 102 of the
MACRA (https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/Value-Based-Programs/MACRA-MIPS-and-APMs/Final-MDP.pdf). The plan outlines the many ways in which we seek to identify, measure,
and reduce disparities. We will consider in future rulemaking the
commenter's proposed options to advance health equity and reward MIPS
eligible clinicians who promote health equity.
After consideration of the comments, we are finalizing at Sec.
414.1380(b)(1)(xiii) our proposal to award 2 bonus points for each
outcome or patient experience measure and 1 bonus point for each other
high priority measure that is reported in addition to the 1 high
priority measure that is already required to be reported under the
quality performance category submission criteria. We will revisit this
policy in future years. High priority measures are defined as outcome,
appropriate use, patient safety, efficiency, patient experience and
care coordination measures, as identified in Tables A through D in the
Appendix of this final rule with comment period. For the CMS Web
Interface, we will apply bonus points based on the finalized set of
measures reportable through that submission mechanism. MIPS eligible
clinicians will only receive bonus points if they submit a high
priority measure with a performance rate that is greater than zero,
provided that the measure meets the case minimum and data completeness
requirements. We believe that this will encourage stronger reporting of
those measures that are
[[Page 77294]]
more closely aligned to our measurement goals.
The following is a summary of the comments we received regarding
our proposal for establishing a cap on bonus points awarded for the
reporting of additional high priority measures:
Comment: Some commenters opposed our proposal to cap bonus points
for high priority measures. Others recommended that the cap be
increased from 5 percent of the denominator as proposed to 10 percent
of the denominator as in our alternative option. Those who opposed the
cap on bonus points at 5 percent of the denominator believe that the 5
percent cap was too low to encourage the reporting of high-priority
measures. One commenter requested that CMS share a data analysis
demonstrating the necessity for a cap. Others cautioned that quality
measures and the available bonus points may be selected, not for the
benefit of the clinician or patient, but only to obtain the bonus
points, and that this defeats the purpose of true quality measurement
for quality patient care.
Response: After consideration of the comments, we believe
increasing the cap on bonus points to 10 percent of the quality score
denominator for high priority measures provides a strong incentive to
report these measures while still providing a necessary safeguard to
avoid masking poor performance. While our long term goals for the
program are to move towards the use of outcome and other high priority
measures as much as possible, we also acknowledge the important role
that other measures play at this time. We remain concerned, however,
that without a cap in place, or with a cap that is too high, we could
incentivize the reporting of additional measures over a focus on
performance in relevant clinical areas, and mask poor performance with
higher bonus points. We understand commenters' concern that quality
measures and the available bonus points may be selected, not for the
benefit of the clinician or patient, but only to obtain the bonus
points. We have identified high priority measures to encourage
meaningful measurement in each of the high priority areas and believe
MIPS eligible clinicians who report on these measures will continue to
work to improve their performance in these areas accordingly. At the
same time, we will continue to monitor reporting trends and revisit our
policies on bonus points for high priority measures as the program
develops in future years.
Comment: Some commenters were concerned that at a 5 percent cap,
CMS may be incentivizing the reporting of a high priority measure over
high performance on another measure. Some commenters recommended that
CMS defer awarding bonus points for high priority measures to reduce
the complexity of the scoring methodology within the quality
performance category.
Response: We do not believe that raising the bonus cap to 10
percent will mask poor performance. Instead, we believe it will
encourage additional reporting of these outcome and high priority
measures. We note that we will not assign bonus points if an additional
high priority measure is reported with a zero performance rate or if
the reported measure does not meet the case minimum or data
completeness requirements. We believe that this approach will avoid the
issue that the commenters have identified. We will closely monitor
reporting trends to ensure that this balance is maintained.
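The conditions above can be sketched as a simple predicate. This is an illustrative example only, not part of the regulation; the case minimum and data completeness thresholds shown are assumptions for illustration, not values stated in this section.

```python
# Illustrative sketch only: an additional high priority measure earns no
# bonus points if it is reported with a zero performance rate or fails the
# case minimum or data completeness requirements. The thresholds below
# (20 cases, 50 percent completeness) are assumed for illustration.

def earns_high_priority_bonus(performance_rate: float,
                              case_count: int,
                              data_completeness: float,
                              case_minimum: int = 20,
                              completeness_minimum: float = 0.5) -> bool:
    if performance_rate == 0:           # zero performance rate: no bonus
        return False
    if case_count < case_minimum:       # below case minimum: no bonus
        return False
    if data_completeness < completeness_minimum:  # incomplete data: no bonus
        return False
    return True
```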
Comment: One commenter recommended that we cap the bonus points
that CMS Web Interface users can earn as the CMS Web Interface includes
several high priority measures.
Response: We believe the bonus points should be applied
consistently across all submission mechanisms. Groups who report via
the CMS Web Interface submit data on a pre-defined set of measures and
do not have the ability to report on additional measures through
another submission mechanism (other than the CAHPS for MIPS survey). We
note that CMS Web Interface users are subject to the same 10 percent
cap that all other MIPS eligible clinicians have, so CMS Web Interface
users will not receive any additional credit compared to other MIPS
eligible clinicians. We will closely monitor reporting trends to
address commenters' concern and ensure that Web Interface users do not
receive an unfair advantage by having more high priority measures
available to them than other MIPS eligible clinicians.
After consideration of the comments, we are finalizing at Sec.
414.1380(b)(1)(xiii) a modification to the proposed high priority
measure cap. Specifically, we are increasing the cap for high priority
measures from 5 percent to 10 percent of the denominator (total
possible points the MIPS eligible clinician could receive in the
quality performance category) \30\ of the quality performance category
for the first 2 years. We believe that this cap protects against
rewarding reporting over performance while still encouraging reporting
of the types of measures which will form the foundation of the future
of the program. We plan to decrease this cap in future years.
---------------------------------------------------------------------------
\30\ For example, the denominator for a MIPS eligible clinician
who is a solo practitioner would be 60 points if the clinician has
six applicable measures (6 measures x 10 points). If the MIPS
eligible clinician, who is a solo practitioner, only has 5
applicable measures, then the denominator would be 50 points (5
measures x 10 points). A group of 16 or more would have a
denominator of 70 points assuming the group had 6 applicable
measures and enough cases to be scored on the readmission measure (7
measures x 10 points).
---------------------------------------------------------------------------
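The arithmetic in the footnote, together with the finalized 10 percent cap, can be sketched as follows. This is an illustrative example only; the function names are hypothetical and not part of the regulation.

```python
# Illustrative sketch of the quality performance category denominator and
# the finalized 10 percent cap on high priority measure bonus points.
# Function names are hypothetical.

def quality_denominator(applicable_measures: int,
                        points_per_measure: int = 10) -> int:
    """Total possible points in the quality performance category."""
    return applicable_measures * points_per_measure

def capped_bonus(bonus_points: float, denominator: int,
                 cap_fraction: float = 0.10) -> float:
    """High priority bonus points, capped at 10 percent of the denominator."""
    return min(bonus_points, cap_fraction * denominator)

# Solo practitioner with six applicable measures: denominator is 60 points,
# so high priority bonus points are capped at 6.
print(quality_denominator(6))   # 60
print(capped_bonus(8, 60))      # 6.0
```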
(f) Incentives To Use CEHRT To Support Quality Performance Category
Submissions
Section 1848(q)(5)(B)(ii) of the Act provides that under the
methodology for assessing the total performance of each MIPS eligible
clinician, the Secretary shall: (1) Encourage MIPS eligible clinicians
to report on applicable measures under the quality performance category
through the use of CEHRT and QCDRs; and (2) for a performance period
for a year, for which a MIPS eligible clinician reports applicable
measures under the quality performance category through the use of
CEHRT, treat the MIPS eligible clinician as satisfying the CQMs
reporting requirement under section 1848(o)(2)(A)(iii) of the Act for
such year. To encourage the use of CEHRT for quality improvement and
reporting on measures under the quality performance category, we
proposed a scoring incentive to MIPS eligible clinicians who use their
CEHRT systems to capture and report quality information.
We proposed to allow one bonus point for each measure under the
quality performance category score, up to a maximum of 5 percent of the
denominator of the quality performance category score if: (1) The MIPS
eligible clinician uses CEHRT to record the measure's demographic and
clinical data elements in conformance to the standards relevant for the
measure and submission pathway, including but not necessarily limited
to the standards included in the CEHRT definition proposed in Sec.
414.1305; (2) the MIPS eligible clinician exports and transmits measure
data electronically to a third party using relevant standards or
directly to us using a submission method as defined at Sec. 414.1325;
and (3) the third party intermediary (for example, a QCDR) uses
automated software to aggregate measure data, calculate measures,
perform any filtering of measurement data, and submit the data
electronically to us using a submission method as defined at Sec.
414.1325.
[[Page 77295]]
These requirements are referred to as ``end-to-end electronic
reporting.''
We note that this bonus would be in addition to the bonus points
for reporting high priority measures. MIPS eligible clinicians would be
eligible for both this bonus option and the high priority bonus option
with separate bonus caps for each option. We also proposed an
alternative approach of capping bonus points for this option at 10
percent of the denominator of the quality performance category score.
Our rationale for the 5 percent cap was that we do not want to mask
poor performance by allowing a MIPS eligible clinician to perform
poorly on a measure but still obtain a high quality performance
category score; however, we were also concerned that 5 percent may not
be enough incentive to encourage end-to-end electronic reporting. We
sought comment on the appropriate threshold for this bonus cap. We
proposed the CEHRT bonus would be available to all submission
mechanisms except claims submissions. This incentive would also be
available for MIPS APMs reporting through the CMS Web Interface (except
in cases where measures are entered manually into the CMS Web
Interface). Specifically, MIPS eligible clinicians who report via
qualified registries, QCDRs, EHR submission mechanisms, and CMS Web
Interface in a manner that meets the end-to-end reporting requirements
may receive one bonus point for each reported measure with a cap as
described. We did not propose to allow this option for claims
submission, because there is no mechanism for MIPS eligible clinicians
to identify that the information was pulled using an EHR. This approach
supports and encourages innovative approaches to measurement using the
full array of standards ONC adopts, and the data elements MIPS eligible
clinicians capture and exchange, to support patient care. Thus,
approaches where a qualified registry or QCDR obtains data from a MIPS
eligible clinician's CEHRT using any of the wide range of ONC-adopted
standards and then uses automated electronic systems to perform
aggregation, calculation, filtering, and reporting would qualify each
such measure for the CEHRT bonus point. In addition, measures submitted
using the EHR submission mechanism or the EHR submission mechanism
through a third party would also qualify for the CEHRT bonus.
We requested comment on this proposed approach.
The following is a summary of the comments we received regarding
our proposal to award CEHRT bonus points for end-to-end electronic
submissions.
Comment: Commenters questioned whether the 5 percent cap would
provide a worthwhile incentive. One commenter noted that the potential
bonus points are so diluted that physicians will not be motivated to
navigate the additional complexity of earning a bonus point. Others
supported the higher cap.
Response: We agree with commenters that capping the bonus available
at 5 percent would not provide a sufficient incentive to utilize CEHRT
for reporting in the initial years of the program. Accordingly, we are
finalizing our alternative option that a provider may receive bonus
points up to 10 percent of the denominator of the quality performance
category score for the first 2 years of the program. We intend to
decrease this cap in future years through future notice and comment
rulemaking.
Comment: One commenter recommended giving 2 points, not 1, for the
CEHRT incentive.
Response: We agree with the commenter that the proposed bonus would
not provide a sufficient incentive for MIPS eligible clinicians.
Although we are not increasing the points per-measure that a clinician
can earn by conducting electronic end-to-end reporting, we are
finalizing our alternate option which would cap the bonus a clinician
may earn at 10 percent instead of 5 percent of the denominator of the
quality performance category score.
Comment: A few commenters wanted bonus incentives for use of QCDRs.
Currently, many QCDRs, including specialty registries, cannot obtain
data from CEHRT or support the standards for data submission. The
commenters believed that clinicians should still receive bonus points
if they transfer data from an EHR into their own registry. One
commenter recommended that CMS encourage EHRs to embrace
interoperability so that data transfer can occur between EHR and QCDRs.
Another commenter stated that CMS should also offer bonus points to
clinicians who use a QCDR (regardless of its ties to CEHRT) since QCDRs
in and of themselves represent robust electronic data submission for a
growing number of clinicians.
Response: We appreciate commenters' support for the use of QCDRs.
Under the policy we are finalizing, MIPS eligible clinicians who
capture their data using CEHRT and electronically export and transmit
this data to a QCDR which uses automated software to aggregate measure
data, calculate measures, perform any filtering of measurement data,
and submit the data electronically via a submission method defined at
Sec. 414.1325, would be able to earn a bonus point. Any submission
pathway that involves manual abstraction and re-entry of data elements
that are captured and managed using certified health IT is not end-to-
end electronic quality reporting and is not consistent with the goal of
the bonus. It is, however, important to note that end-to-end electronic
submission is a goal for which bonus points are available, and not a
requirement to achieve maximum performance in the quality performance
category.
Comment: Some commenters supported the proposed bonus points for
the use of certified EHR technology. One commenter agreed with the
inclusion of bonus points to encourage reporting via QCDR and CEHRT,
but was concerned that giving bonus points for reporting via the CMS
Web Interface and via Qualified Registry would not encourage use of
QCDRs and CEHRT, and that giving bonuses for all of these methods would
function as a penalty for those who submit via claims. This commenter
encouraged either only giving bonus points to CEHRT or QCDR-based
submissions or attaching more bonus points to these mechanisms. Another
commenter recommended that CMS encourage the continued uptake of CEHRT
and QCDRs by awarding bonus points for use of those technologies and
not by unfairly penalizing MIPS eligible clinicians that have not yet
adopted them. One commenter appreciated the optional bonus points that
can be awarded for the use of CEHRT, as this is foundational to the
functionality needed for a quality program of this magnitude.
Response: We appreciate commenters' support for the proposed bonus
for use of CEHRT. We want to encourage increased usage of CEHRT and
believe this functionality should be available for qualified registries
and CMS Web Interface as well as EHR and QCDR submission.
Comment: Commenters wanted clarification on how to determine which
measures qualify for end-to-end electronic reporting, as measures
reported through the CMS Web Interface and QCDR may or may not involve
``end-to-end'' electronic reporting. Commenters requested that CMS
consider any measures coming from an electronic source to an electronic
source, following relevant standards, as eligible for the electronic
reporting bonus points. One commenter proposed clarifying our
requirement for ``end-to-end reporting'' as follows: ``in conformance
to the standards relevant for the measure and submission pathway allows
the manner in which
[[Page 77296]]
the specific registry requires the data submission, such as data
derived from an electronic source, which might not be CEHRT, and the
destination is electronic.'' One commenter noted that many clinicians
will not have end-to-end electronic capability by 2018 for reasons
outside of their control.
Response: The end-to-end electronic reporting bonus point is not
specific to certain CQMs, but would apply in any case where the
submission pathway maintains fully electronic management and movement
of patient demographic and clinical data once it is initially captured
in the eligible clinician's certified health IT. Where a registry is
calculating and submitting the Quality Payment Program-accepted
measures on the MIPS eligible clinician's behalf, this means that: (1)
The MIPS eligible clinician uses certified health IT to capture and
electronically provide to the registry clinical data for the measures,
using appropriate electronic means (for example, through secure access
via API or by electronic submission of QRDA documents); and (2) the
registry uses verifiable software to process the data, calculate, and
report measure results to CMS (in CMS-specified electronic submission
format). In order to qualify for a bonus point, submission via a QCDR
or the CMS Web Interface would need to adhere to these principles. Any
submission pathway that involves manual abstraction and re-entry of
data elements that are captured and managed using certified health IT
is not end-to-end electronic quality reporting and is not consistent
with the goal of the bonus. We understand that not all clinicians may
have end-to-end electronic capabilities immediately, and note that end-
to-end electronic submission is a goal for which bonus points are
available, and not a requirement to successfully participate in MIPS.
We are finalizing policies that offer MIPS eligible clinicians
substantial flexibility and sustain proven pathways for successful
participation across all of the performance categories. As noted by the
commenter, we have included some pathways to which the end-to-end
electronic reporting bonus points may not apply in 2017. For example,
if a MIPS eligible clinician submits electronic data to a registry,
but the electronic data is not captured from certified health IT or if
a MIPS eligible clinician uses CEHRT to capture data, but then
calculates measures using chart abstraction and submits the resulting
measures to CMS, then the MIPS eligible clinician would not be eligible
for the end-to-end electronic reporting bonus points. Those MIPS
eligible clinicians who are already successfully reporting quality
measures meaningful to their practice via one of these pathways may
continue to do so, or may of course choose a different pathway, if they
believe the different pathway will offer them a better avenue for
success in MIPS.
Comment: Several commenters requested that CMS create incentives to
make CEHRT more flexible because many registries rely on both automated
and manual data entries. Commenters were concerned that most EHRs do
not support all the necessary data elements for advanced quality
measures or analytics and require hybrid approaches to data collection,
but that other electronic submissions have that data. The commenters
believed that CMS should reward eligible clinicians for utilizing
registries, leveraging electronic capture, reporting where it is
feasible, and using alternative methods including manual data entry.
One commenter wanted to incorporate use of an EHR with a registry
system to minimize double reporting and documentation.
Response: We are finalizing policies that offer MIPS eligible
clinicians substantial flexibility and sustain proven pathways for
successful participation. For purposes of the end-to-end electronic
reporting bonus point, the pathway should maintain fully electronic
management and movement of data once it is initially captured in the
MIPS eligible clinician's health IT. Standards-based, interoperable
methods for managing quality measurement data are essential for
improving the value of measures to MIPS eligible clinicians while
reducing these clinicians' data-handling burdens. We would expect the
elements of a hybrid measure that use essential patient demographic and
clinical data normally managed in CEHRT or other certified health IT
for care delivery and documentation (for example, Common Clinical Data
Set elements) could be made available to the registry using electronic
means. Electronic means would include transmission in any Clinical
Document Architecture format supported by the CEHRT, or an
appropriately secure API.
We recommend and encourage all registries to pursue standards-
based, fully electronic methods for accurately extracting and importing
data from other electronic sources, in addition to data supported by
CEHRT and other ONC-Certified Health IT, as appropriate to their
measures. However, we recognize that for some types of measures,
supplementing the data normally recorded in EHRs in the course of care
may, in the near term, still require registries to continue using
alternate means, including manual means, of harvesting the data
elements not yet practically available electronically. In future years,
we
anticipate evolving data standards and data aggregation and management
services infrastructure, including robust registries capable of
seamlessly aggregating and analyzing data across multiple electronic
types and sources, will eventually eliminate the burden of manual
processes including abstraction.
Comment: One commenter noted that utilizing the CMS Web Interface
would involve abstraction and therefore not truly be completely
electronic, and recommended that the bonus point for ``end to end''
quality measure submission be applied only when data is submitted from
the CEHRT to CMS. Another commenter noted the proposed rule does not
address whether data scrubbing is allowed when the MIPS eligible
clinician is receiving bonus points for using these methods. The
commenter believed data scrubbing is necessary to improve the accuracy
of quality measures and recommends that CMS clarify that data scrubbing
does not nullify bonus points for data submission.
Response: We are finalizing our proposed policy that the CEHRT
bonus would be available for groups using CMS Web Interface for
measures submitted in a manner that meets the end-to-end reporting
requirements. CMS Web Interface users may receive one bonus point for
each reported measure with a cap of 10 percent of the denominator of
the quality performance category. For CMS Web Interface users, we
define end-to-end electronic reporting as cases where users upload data
that has been electronically exported or extracted from EHRs,
electronically calculated, and electronically formatted into a CMS-
specified file that is then electronically uploaded via the Web
Interface as opposed to cases where measures are entered manually into
the CMS Web Interface.
Any submission pathway that involves manual abstraction and re-
entry of data elements that are captured and managed using certified
health IT is not end-to-end electronic quality reporting and is not
consistent with the goal of the bonus. Thus, the bonus points would not
apply to measures entered manually into the CMS Web Interface, though
those measurements would be included in the MIPS eligible clinician's
scoring for the performance category.
We do not believe limiting the bonus points to the relatively small
number of measures that we will be able to accept directly from CEHRT
for the 2017
[[Page 77297]]
performance period would be the best way to recognize and encourage
development of other standards-based, interoperable methods for
managing quality measurement data. If a MIPS eligible clinician finds
the measures most meaningful to their practice in a registry, and makes
patient clinical and demographic data captured and managed using
certified health IT available to the registry for use in calculating a
measure, that is consistent with the goals of end-to-end electronic
reporting, stimulating innovation in the use of standards to re-use
data captured in the course of care to advance more timely and
affordable availability of meaningful measure results to help
drive continuous improvement.
Comment: Others were concerned that limiting data sources to CEHRT
alone would eliminate the potential for obtaining bonus points for many
specialties and practice types. Commenters expressed concern that their
electronic data sources cannot be certified or that financial
constraints make these resources unavailable.
Response: Bonus points apply both to measures that can be captured,
calculated, and reported only using CEHRT and to measures for which
only some of the data elements needed for the measure are currently
supported by CEHRT. For purposes of the end-to-end electronic reporting
bonus points, the pathways for those patient demographic and clinical
data that are initially captured in the eligible clinician's certified
health IT (including but not necessarily limited to those modules
required to meet the CEHRT definition for MIPS) should maintain fully
electronic management and movement from the clinician through measure
submission to CMS. For example, where a registry is calculating and
submitting MIPS-accepted measures that each use one or more data
elements captured and managed for care delivery and documentation using
certified health IT (such as, but not limited to, elements included in
the Common Clinical Data Set), this means that: (1) The eligible
clinician uses certified health IT to capture and electronically
provide to the registry those clinical data using appropriate
electronic means; and (2) the registry uses verifiable software to
process the data, calculate, and report measure results to CMS using
appropriate electronic means. Appropriate electronic means for getting
data from the certified health IT to the registry would include secure
access via API or by electronic submission of QRDA or other Clinical
Document Architecture documents, and appropriate electronic means of
measure submission from the registry to CMS would be the CMS-specified
electronic submission format.
Comment: One commenter disagreed with the decision to award bonus
points to MIPS eligible clinicians who report using their CEHRT since
their EHR vendor charges a high fee to compile the data and report the
measures on the clinician's behalf rather than directly from the EHR.
Response: We appreciate the commenter's concerns. We believe the
awarding of bonus points for use of CEHRT is important to incentivize
solutions that ultimately reduce cost and burden for MIPS eligible
clinicians. Our approach also encourages clinicians to consider a range
of options to determine which health IT systems and submission
mechanisms will provide the best value to their practice. We expect
that over time, as the technology to support electronic reporting
evolves and more options become available, the cost and administrative
burden on participants leveraging these technologies will continue to
decrease.
Comment: One commenter wanted the CEHRT bonus to be available for
claims-based reporting.
Response: The CEHRT bonus is designed for submission of data
captured utilizing CEHRT. We did not propose to allow this option for
claims submission because there is no mechanism for MIPS eligible
clinicians to identify that the information included in the claims
submission was pulled using CEHRT.
Comment: One commenter was concerned that there are few EHR
products available that can provide the reporting functionality
necessary to carry out the MIPS requirements. One commenter noted that
CEHRT standards fall short of providing QRDA or appropriate
functionality without errors.
Response: ONC's 2014 Edition and 2015 Edition Health IT
Certification criteria \31\ do align with the Quality Payment Program
requirements. Specifically, the 2015 Edition, while not required for
2017, offers rigorous testing for more features and functionality than
have prior editions of certification. Each developer will need to
decide how best to support the needs of its users, but we expect that
between now and 2018, when the MIPS requirements to use technology
certified to the 2015 Edition will be in full effect, that more
products will be certified to the 2015 Edition in order to support
their users' needs for MIPS program participation. As CMS and ONC
assess the impact of our policies and learn from the transition year of
the Quality Payment Program (along with health IT vendors and MIPS
eligible clinicians and groups) we will continue advancing health IT
certification infrastructure and support in parallel to the needs of
developers, clinicians, and other care providers to encourage the
continued development, adoption and use of certified health IT
including quality measurement standards to increase the availability of
standards-based, interoperable data management and aggregation
technology.
---------------------------------------------------------------------------
\31\ 45 CFR 170.314(c)(1) through (3) and 170.315(c)(1) through
(3) and optionally (c)(4).
---------------------------------------------------------------------------
After consideration of the comments, we are finalizing at Sec.
414.1380(b)(1)(xiv) that one bonus point is available for each quality
measure submitted with end-to-end electronic reporting under certain
criteria described in this section. We are modifying the
CEHRT bonus cap. Specifically, we are increasing the cap for using
CEHRT for end-to-end reporting from 5 percent to 10 percent of the
denominator of the quality performance category (total possible points
for the quality performance category) for the first 2 years. We intend
to decrease this cap in future years through future notice and comment
rulemaking. MIPS eligible clinicians will be eligible for both the
CEHRT bonus option and the high priority bonus option with separate
bonus caps for each option. The CEHRT bonus will be available to all
submission mechanisms except claims submissions.
Specifically, MIPS
eligible clinicians who report via qualified registries, QCDRs, EHR
submission mechanisms, and CMS Web Interface in a manner that meets the
end-to-end reporting requirements may receive one bonus point for each
reported measure with a cap as described. For Web Interface users, we
define end-to-end electronic reporting as cases where users upload data
that has been electronically exported or extracted from EHRs,
electronically calculated, and electronically formatted into a CMS-
specified file that is then electronically uploaded via the Web
Interface as opposed to cases where measures are entered manually into
the CMS Web Interface.
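The finalized bonus arithmetic can be sketched as follows. This is an illustrative example only; the mechanism labels are hypothetical shorthand and not part of the regulation.

```python
# Illustrative sketch of the finalized CEHRT bonus: one bonus point per
# measure submitted with end-to-end electronic reporting, capped at 10
# percent of the quality performance category denominator, and unavailable
# for claims submissions. Mechanism labels are hypothetical shorthand.

ELIGIBLE_MECHANISMS = {"qualified_registry", "qcdr", "ehr",
                       "cms_web_interface"}

def cehrt_bonus(end_to_end_measures: int, denominator: int,
                mechanism: str, cap_fraction: float = 0.10) -> float:
    if mechanism not in ELIGIBLE_MECHANISMS:   # e.g., "claims" earns nothing
        return 0.0
    # One bonus point per qualifying measure, up to the cap.
    return min(float(end_to_end_measures), cap_fraction * denominator)
```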
Due to requests from many commenters that we provide more clarity
around the various options for a MIPS eligible clinician to satisfy the
``end-to-end electronic'' requirements and to earn the CEHRT bonus
points, we are providing additional explanation regarding the final
policy.
[[Page 77298]]
There are several key steps common across all of the submission
pathways for end-to-end electronic reporting: (1) The collection of
data at the point of care; (2) calculation of CQM performance as a
numerator/denominator ratio; and (3) submission of the data to CMS
using a standard format. ONC's certification regulations (45 CFR
170.315(c)(1) through (3) in the 2015 edition) have established several
independent but complementary quality measurement capability criteria
to which health IT modules can be certified because some health IT may
not support all of the steps in the measurement process. For example,
one application may support capturing the clinical data at the point of
care (step 1), but not the calculation of measure results (step 2) or
reporting of them to payers like CMS (step 3). Instead, that
application may be built to export the measurement data in standard
format to another application that performs the calculation and
reporting functions but does not itself support initial data capture.
each step necessary from data capture to CMS submission.
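The three key steps above can be sketched as separate functions, mirroring how distinct certified health IT modules may each handle a subset of the pipeline. This is an illustrative sketch only; all names and the measure identifier are hypothetical.

```python
# Illustrative sketch of the three key steps named above. Step 1: capture
# data at the point of care; step 2: calculate CQM performance as a
# numerator/denominator ratio; step 3: format the result for submission to
# CMS. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class PatientRecord:
    in_denominator: bool   # patient qualifies for the measure
    in_numerator: bool     # the measured care goal was met

def capture(records):
    """Step 1: retain records that fall into the measure denominator."""
    return [r for r in records if r.in_denominator]

def calculate(denominator_records):
    """Step 2: CQM performance as a numerator/denominator ratio."""
    numerator = sum(1 for r in denominator_records if r.in_numerator)
    return numerator, len(denominator_records)

def format_submission(numerator, denominator, measure_id):
    """Step 3: package the result in a standard submission format."""
    return {"measure": measure_id, "numerator": numerator,
            "denominator": denominator}
```

A single application may perform all three steps, or a capture-only module may export its data to another application that performs `calculate` and `format_submission`.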
Although certification for each of these steps helps to ensure
accurate calculation and reporting of measures, our final policy seeks
to
offer MIPS eligible clinicians the opportunity to earn bonus points for
a wider array of measurement pathways rather than the EHR submission
method currently available only for eCQMs for which a health IT
product, service, or registry could be certified under ONC's Health IT
Certification Program as being in conformance with CMS-published
specifications. At this time, we believe it is important to ensure
incentives are tied to a wider array of submission pathways that
facilitate automated, electronic reporting.
However, we continue to believe that standards-based, interoperable
methods for managing quality measurement data are essential both for
improving the value of measures to eligible clinicians and for reducing
these clinicians' data-handling burdens.
In a 2014 concept paper, Connecting Health and Care for the Nation:
A 10-Year Vision to Achieve an Interoperable Health IT
Infrastructure,\32\ ONC described how interoperability is necessary for
a ``learning health system'' in which health information flows
seamlessly and is available to the right people, at the right place, at
the right time to better inform decision making to improve individual
health, community health, and population health. The vision that ONC
and CMS share for health IT in the learning health system is that it
will integrate seamlessly with efficient, clinical care processes,
while sustaining strong protections for the security and integrity of
the data. Within that infrastructure, quality improvement support
functions are increasingly expected to enable and rely upon the
seamless aggregation, routine analysis, and automated electronic
management of data needed to deliver meaningful, actionable feedback on
clinician performance and treatment efficacy while minimizing data-
related burdens on clinicians. As we implement, observe, and learn from
the transition year of the Quality Payment Program, CMS and ONC will
continue working in close partnership to enable ONC to continue
advancing health IT certification infrastructure in parallel to the
needs of clinicians, other providers, consumers, purchasers, and payers
who will increasingly rely on standards-based, interoperable data
management and aggregation technology to better measure and
continuously improve safety, quality, and value of care.
---------------------------------------------------------------------------
\32\ https://www.healthit.gov/sites/default/files/ONC10yearInteroperabilityConceptPaper.pdf.
---------------------------------------------------------------------------
Table 18 summarizes at a high level several pathways we expect to
be widely available to MIPS eligible clinicians in 2017 and 2018 for
quality performance reporting and which of these pathways would earn
bonus points for use of CEHRT to report quality measures electronically
from capture to CMS (``end-to-end'').
Table 18--Examples Illustrating How End-to-End Electronic Reporting
Requirements Work

(1) MIPS eligible clinician scenario: Uses health IT certified to
Sec. 170.314 or Sec. 170.315(c)(1) through (3)--that is, the MIPS
eligible clinician's system is certified as capable of capturing,
calculating, and reporting eCQMs.
Actions taken: The MIPS eligible clinician uses their e-measure-
certified health IT to submit MIPS eCQMs to CMS via the MIPS EHR data
submission mechanism (described at 42 CFR 414.1325).
Meets end-to-end reporting bonus: Yes.

(2) MIPS eligible clinician scenario: Uses health IT certified to
Sec. 170.314 or Sec. 170.315(c)(1) to capture data and export MIPS
eCQM data electronically to a third-party intermediary.
Actions taken: The third-party intermediary is certified as being in
conformance with Sec. 170.315(c)(2) and (3) (import data/calculate,
report results) for each measure, and calculates and submits MIPS
eCQMs.
Meets end-to-end reporting bonus: Yes.

(3) MIPS eligible clinician scenario: Uses health IT certified to
Sec. 170.314 or Sec. 170.315(c)(1) to capture data and export a MIPS
eCQM electronically to a QCDR.
Actions taken: The QCDR uses automated, verifiable software to process
data, calculate, and electronically report a MIPS eCQM to CMS
consistent with CMS-vetted protocols.
Meets end-to-end reporting bonus: Yes.

(4) MIPS eligible clinician scenario: Uses certified health IT,
including but not necessarily limited to that needed to satisfy the
definition of CEHRT at Sec. 414.1305, to capture demographic and
clinical data and transmit it to a QCDR using an appropriate Clinical
Document Architecture standard (such as QRDA or C-CDA).
Actions taken: The QCDR uses automated, verifiable software to process
data, calculate, and electronically report approved non-MIPS measures
to CMS consistent with CMS-vetted protocols.
Meets end-to-end reporting bonus: Yes.

(5) MIPS eligible clinician scenario: Uses certified health IT,
including but not necessarily limited to that needed to satisfy the
definition of CEHRT at Sec. 414.1305, to capture demographic and
clinical data, and makes the data available to a third-party
intermediary via secure application programming interface (API).
Actions taken: The third-party intermediary uses automated, verifiable
software to process data, calculate, and electronically report approved
non-MIPS measures to CMS consistent with CMS-vetted protocols.
Meets end-to-end reporting bonus: Yes.

[[Page 77299]]
(6) Uses certified health IT, The eligible No; manual entry
including but not necessarily clinician or interrupted data
limited to that needed to group, or a third- flow and
satisfy the definition of CEHRT party electronic
at Sec. 414.1305, to capture intermediary uses calculation is
demographic and clinical data automated, not verified.
and transmit it to the third- verifiable
party intermediary using software to
appropriate standard or method process data,
(QRDA, C-CDA, API). calculate and
reports to MIPS
approved measures
through manual
entry, or manual
manipulation of
an uploaded file,
into a CMS web
portal.
(7) Uses certified health IT to The third-party No; manual
support patient care and intermediary uses abstraction
capture data but abstracts it automated, interrupted data
manually into a web portal or verifiable flow.
abstraction-input app. software to
process data,
calculate and
report measure.
------------------------------------------------------------------------
In the first example in Table 18, for MIPS eCQMs, when a MIPS
eligible clinician wishes to use CEHRT for the entire process of data
capture to CMS submission, the health IT solution must be certified to
Sec. 170.315(c)(1) through (3) in order to achieve the bonus point.
In the second and third examples, the MIPS eligible clinician has
chosen to participate in a registry or QCDR and report eCQMs. This MIPS
eligible clinician sends quality data electronically from CEHRT to the
registry, and the registry calculates the measure results and
eventually submits the eCQMs data to CMS on the eligible clinician's
behalf. In the second case, the registry uses health IT that is
certified to Sec. 170.315(c)(2) through (3) in order for the MIPS
eligible clinician to earn the bonus points for end-to-end electronic
reporting. In the third case, the QCDR does not use health IT that is
certified to a particular standard, but uses automated, verifiable
software to process data, calculate, and electronically report a MIPS
eCQM to CMS consistent with CMS-vetted protocols. In both of these
cases, a MIPS eligible clinician or group would earn a bonus point for
each measure submitted in this manner, up to a 10 percent cap.
In both the fourth and fifth examples, the MIPS eligible clinician
has chosen to participate in a QCDR and report on the MIPS-accepted
non-MIPS (registry) measures. The MIPS eligible clinician uses CEHRT,
and perhaps some additional certified health IT modules, in the normal
course of clinical documentation and this certified health IT captures
clinical data needed for the MIPS eligible clinician's selected
registry measures. In both the fourth and fifth examples, the QCDR has
satisfied the MIPS criteria, including obtaining CMS' approval of the
non-MIPS measures this MIPS eligible clinician is using. In these
cases, the QCDR processes the clinical data to calculate measure
results and reports the MIPS-approved non-MIPS measures consistent with
CMS-vetted protocols. The only difference between these two examples is
how the data gets from the MIPS eligible clinician's certified health
IT to the QCDR. In the fourth example, the MIPS eligible clinician's
certified health IT transmits quality data documents to the registry in
QRDA or other Clinical Document Architecture standard format. In the
fifth example, the MIPS eligible clinician has made appropriate
arrangements to grant the registry access to the quality measurement
information via a secure application programming interface (API). We
have presented both examples to emphasize that the MIPS eligible
clinician would receive the bonus point under each scenario. Either the
secure transmission of data within CDA documents or a secure API is an
electronic method of managing and moving the quality measurement data
to where it is needed.
In the sixth example, the group, or a third party submitting data
on its behalf, may use the CMS Web Interface to submit electronic
data for quality measure submissions. However, such a submission would
only be awarded the bonus for end-to-end reporting if the submission
included uploading an electronic file without modification. This is to
preserve the electronic flow of data end-to-end and provide a
verifiable method to ensure that manual abstraction, manual
calculation, or subsequent manual correction or manipulation of the
measures using abstraction did not occur.
Finally, in the last example, the MIPS eligible clinician initially
captures data electronically, but manually abstracts the data for
analysis and keys it into a web portal used by a registry. The registry
then calculates and submits the measure results to CMS electronically.
In this case, no bonus point would be given as the manual abstraction
process interrupted the complete end-to-end electronic data flow.
(g) Calculating the Quality Performance Category Score
The next two subsections provide a detailed description of how the
quality performance category score would be calculated under our
finalized policies.
(i) Calculating the Quality Performance Category Score for Non-APM
Entity, Non-CMS Web Interface Reporters
To calculate the quality performance category score, we proposed to
sum the weighted points assigned for the measures required by the
quality performance category criteria plus the bonus points and divide
by the weighted sum of total possible points. (81 FR 28256)
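As a rough illustration only (this is a hypothetical sketch, not CMS's scoring implementation; the function name and sample values are assumptions), the proposed calculation can be expressed as:

```python
def quality_category_score(measure_points, bonus_points, total_possible):
    """Sum the assigned measure points plus bonus points, divide by the
    total possible points, and cap the result at 100 percent (1.0)."""
    score = (sum(measure_points) + bonus_points) / total_possible
    return min(score, 1.0)

# Hypothetical example: six measures worth 10 points each (60 possible),
# 45 points earned across the measures, plus 3 bonus points.
print(quality_category_score([8, 7, 9, 6, 7, 8], 3, 60))  # 0.8
```

The cap in the last line reflects the statement that the quality performance category score cannot exceed 100 percent.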
If a MIPS eligible clinician elects to report more than the minimum
number of measures to meet the MIPS quality performance category
criteria, then we would only include the scores for the measures with
the highest number of assigned points. In the proposed rule (81 FR
28257), we provided an example for how this logic would work. The
quality performance category score would be capped at 100 percent.
We proposed that if a MIPS eligible clinician has met the quality
performance category submission criteria for reporting quality
information, but does not have any scored measures as discussed in
section II.E.6.b.(2) of the proposed rule, then a quality performance
category score would not be calculated. Refer to section II.E.6.a.2.d.
of the proposed rule (81 FR 28254) for details on how we proposed to
address scenarios where a quality performance category score is not
calculated for a MIPS eligible clinician. We requested comment on our
proposals to calculate the quality performance category score.
The following is a summary of the comments we received on our
proposals to calculate the quality performance category score.
Comment: Several commenters expressed concern about the complexity
of the scoring approach. One commenter
[[Page 77300]]
recommended taking an average of the performance percentages as an
alternative.
Response: We have simplified our methodology for scoring the
quality performance category. For example, during the transition year,
we are adding a floor of 3 points for any submitted measure (class 1 or
class 2 measures as defined in Table 17, as discussed in section
II.E.6.a.(2)(c) of this final rule with comment period). This
adjustment will minimize the number of measures that are not scored and
stabilize the denominator of the MIPS quality performance category
score. This also ensures that all MIPS eligible clinicians will have a
quality performance category score. As discussed in the Web Interface
scoring section in section II.E.6.a.(2)(g)(ii), we are not scoring
measures that lack a benchmark or are below case minimum if the measure
meets data completeness criteria.
Comment: Several commenters supported our proposal to use the top
six scored measures.
Response: We appreciate the support and we are finalizing the
proposal to score the top six scored measures for all submission
mechanisms except CMS Web Interface. The required number of measures
for CMS Web Interface is discussed in section II.E.5.b.(3)(a)(ii) of
this final rule with comment period.
Comment: One commenter disagreed with the ability to report more
than 6 measures because not all groups had the same option to report
additional measures given the availability of measures.
Response: With the exception of the CMS Web Interface submission
mechanism (other than the CAHPS for MIPS survey), groups are allowed to
report additional measures. We note that groups, outside of the MIPS
APM scoring standard, have the option to choose whether they will
report via the CMS Web Interface or another submission mechanism. With
regard to the availability of measures, we will continue to monitor
trends to identify areas where further measure development is needed.
After consideration of the comments, we are finalizing our policy
at Sec. 414.1380(b)(1)(xv) to calculate the quality performance
category score as proposed. We will sum the points assigned for the
measures required by the quality performance category criteria plus the
bonus points and divide by the weighted sum of total possible points.
The quality performance category score cannot exceed the total possible
points for the quality performance category. If a MIPS eligible
clinician elects to report more than the minimum number of measures to
meet the MIPS quality performance category criteria, then we will only
include the scores for the measures with the highest number of assigned
points, once the first outcome measure is scored, or if an outcome
measure is not available, once another high priority measure is scored.
We are finalizing our proposal that if a MIPS eligible clinician
does not have any scored measures, then a quality performance category
score will not be calculated. However, we also note that, during the
transition year, with the implementation of the 3-point floor for
class 2 measures as described in Table 17, all MIPS eligible
clinicians who are non-CMS Web Interface users and who submit some
quality data will have a quality performance category score in year 1
of MIPS. The MIPS eligible clinician will receive:
    [ssquf] 3 points for submitted measures that do not meet the
minimum case size, do not have a benchmark, or do not meet data
completeness criteria, even if the measure is reported with a 0 percent
performance rate.
    [ssquf] 3 points or more for submitted or calculated measures that
meet the minimum case size, have a benchmark, and meet data
completeness criteria, even if the measure is reported with a 0 percent
performance rate.
However, as we will illustrate below, because we have changed the
performance standards, submission criteria, and other scoring elements,
we believe the scoring system will be simpler to understand and that it
will reduce burden on MIPS eligible clinicians trying to achieve a
higher quality performance category score. Thus, based on public
comments, we are adjusting multiple parts of our proposed scoring
approach to ensure that we do not unfairly penalize MIPS eligible
clinicians who have not had time to prepare adequately to succeed under
our proposed approach while still rewarding high performers.
For example, we are no longer requiring a cross-cutting measure for
patient-facing MIPS eligible clinicians, as discussed in section
II.E.5.(b)(3) of this final rule with comment period. Additionally, we
are no longer requiring two of the 3 population health measures and are
only requiring the all-cause hospital readmission measure for groups of
16 or more clinicians instead of our proposed approach of groups of 10
or more, assuming the case minimum of 200 cases has been met, as
discussed in section II.E.5.b.(6) of this final rule with comment
period. If the case minimum of 200 cases has not been met, we will not
score this measure. Thus, the MIPS eligible clinician will not receive
a zero for this measure, but rather this measure will not apply to the
MIPS eligible clinician's quality performance category score.
We also note that if a group of 16 or more clinicians does not
report any quality performance category data but submits information
in other performance categories, the group would still be scored on
the all-cause readmission measure (assuming the group meets the
readmission measure's minimum case size requirements). If a group of
16 or more did not report any information in any of the performance
categories, then the readmission measure would not be scored.
We are now capping both the high priority bonus and the CEHRT bonus
at 10 percent instead of 5 percent of the denominator of the quality
performance category score. Further, all measures reported can now be
scored with a floor of 3 points even if the measure is below the case
minimum, lacks a benchmark or is below the completeness requirement. We
believe that all of these modified elements, when combined, will
significantly increase participation in the MIPS, will reduce burden
and confusion on MIPS eligible clinicians and will allow MIPS eligible
clinicians to gain experience under the MIPS while penalties are
smaller in nature.
For example, consider a MIPS eligible clinician in a group of 20
practitioners that reports as a group and submits 4 quality measures
instead of the required 6 measures. Of the 4 measures submitted, which
include an outcome measure, each measure has a performance rate that is
low. The clinician is also scored on an additional measure, the all-
cause hospital readmission measure, but has a poor performance rate on
this measure as well. Under this revised scoring approach, we allow all
MIPS eligible clinicians who submit quality measures to receive a 3-
point floor per measure in the quality performance category. Under this
scenario, the MIPS eligible clinician receives the 3-point floor for
each of the 4 submitted measures and the all-cause hospital readmission
measure. The MIPS eligible clinician's quality performance category
weighted score is calculated as follows: 5 measures x 3 points each/
total possible points of 70 points x (quality performance category
weight of 60) = 12.9 points towards the final score.
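The arithmetic of this example can be checked with a short sketch (hypothetical variable names; not CMS scoring code):

```python
# Group-of-20 example: five scored measures (4 submitted plus the
# all-cause hospital readmission measure), each at the 3-point floor,
# out of 70 total possible points, with a category weight of 60.
scored_measures = 5
earned = scored_measures * 3            # 15 earned points
total_possible = 70
category_weight = 60
contribution = earned / total_possible * category_weight
print(round(contribution, 1))  # 12.9
```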
In another example, a MIPS eligible clinician who is a solo
practitioner reports all 6 measures, including an outcome measure,
although all are
[[Page 77301]]
below the required case minimum. The eligible clinician receives a
floor of 3 points for all 6 measures in the quality performance
category even though the measures are below the 20 case size minimum.
Under this scenario, the MIPS eligible clinician's quality performance
category weighted score is calculated as follows: 6 measures x 3 points
each/total possible points of 60 points x (quality performance category
weight of 60), or 18/60 x 60 = 18 points towards the final score. We
note that we did not include the all-cause hospital readmission
measure in the above quality performance category calculation because,
due to reliability concerns, it is not applicable to groups of 15 or
fewer clinicians, solo practitioners, or MIPS individual reporters.
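The solo-practitioner arithmetic can likewise be checked with a short sketch (hypothetical names; not CMS scoring code):

```python
# Solo-practitioner example: six measures at the 3-point floor out of
# 60 total possible points (the readmission measure does not apply),
# with a category weight of 60.
earned = 6 * 3                  # 18 earned points
contribution = earned / 60 * 60
print(contribution)  # 18.0
```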
In another example, a MIPS eligible clinician is in a group of 25
that reports as a group, via registry, 3 process measures, 1 outcome
measure, 1 other high priority (for example, patient safety) measure,
and 1 process measure that is below the case minimum requirement. Two
of the process measures and one outcome measure qualify for the CEHRT
bonus. Measures that do not meet the required case minimum or do not
have a benchmark or fall below the data completeness requirement will
be given 3 points. We emphasize that these measures are treated
differently than a required measure that is not reported. Any required
measure that is not reported would receive a score of zero points and
be considered a scored measure. Table 19 illustrates the example.
Table 19--Quality Performance Category Example With High Priority and
CEHRT Bonus Points
------------------------------------------------------------------------
Measure 1: Outcome Measure using CEHRT; 20 cases; 4.1 points based on
performance of 10 possible points; high priority bonus: 0 (required);
CEHRT bonus: 1.
Measure 2: Process using CEHRT; 21 cases; 9.3 of 10 possible points;
high priority bonus: N/A; CEHRT bonus: 1.
Measure 3: Process using CEHRT; 22 cases; 10 of 10 possible points;
high priority bonus: N/A; CEHRT bonus: 1.
Measure 4: Process; 50 cases; 10 of 10 possible points; high priority
bonus: N/A; CEHRT bonus: N/A.
Measure 5: High Priority (Patient Safety); 43 cases; 8.5 of 10
possible points; high priority bonus: 1; CEHRT bonus: N/A.
Measure 6: Process below case minimum; 10 cases; 3 of 10 possible
points; high priority bonus: N/A; CEHRT bonus: N/A.
All-Cause Hospital Readmission: Claims; 205 cases; 5 of 10 possible
points; high priority bonus: N/A; CEHRT bonus: N/A.
Total points (all measures): 49.9 of 70 possible points; high priority
bonus points: 1; CEHRT bonus points: 3.
------------------------------------------------------------------------
The total possible points for the MIPS eligible clinician is 70
points. The eligible clinician has 49.9 points based on performance.
The MIPS eligible clinician also qualifies for 1 bonus point for
reporting an additional high priority patient safety measure and 3
bonus points for end-to-end electronic reporting of quality measures.
The bonus points for high priority measures and CEHRT reporting are
subject to two separate caps, which are each 10 percent of 70 possible
points or 7 points. The quality performance category score for this
MIPS eligible clinician is (49.9 points + 4 bonus points = 53.9)/70
total possible points x 60 (quality performance category weight) = 46.2
points towards the final score. The quality performance category score
would be capped at 100 percent.
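The Table 19 arithmetic can be checked with a short sketch (hypothetical names; not CMS scoring code):

```python
# Table 19 example: 49.9 performance points, 1 high-priority bonus
# point, and 3 CEHRT bonus points, each bonus independently capped at
# 10 percent of the 70 possible points (7 points).
performance = 49.9
bonus_cap = 7                   # 10 percent of 70 possible points
hp_bonus = min(1, bonus_cap)    # 1, under the cap
cehrt_bonus = min(3, bonus_cap)  # 3, under the cap
score = (performance + hp_bonus + cehrt_bonus) / 70 * 60
print(round(score, 1))  # 46.2
```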
The example in Table 20 illustrates how to calculate the bonus cap
for the high priority measure bonus and the CEHRT bonus. In this
scenario, the MIPS eligible clinician is a solo practitioner who has
submitted 6 measures, as an individual, all above the case minimum
requirement. Since the MIPS eligible clinician is a solo practitioner,
the all-cause hospital readmission measure does not apply. The MIPS
eligible clinician below successfully submitted six quality measures
using end-to-end electronic reporting, and therefore, qualifies for the
CEHRT bonus of one point for each of those measures. In addition to
CEHRT bonus points, the MIPS eligible clinician reported 4 outcome
measures (6 bonus points), a patient experience measure (2 bonus
points) and a care coordination measure (1 bonus point) for 9 total
high priority bonus points. The MIPS eligible clinician receives 2
bonus points for the second, third and fourth outcome measures, given
that no bonus points are given for the first required measure. However,
the number of high priority measure bonus points (9 points) is over the
cap (which is 10 percent of 60 possible points or 6 points), and the
number of CEHRT bonus points (6 points) is at the cap (which is 10
percent of 60 possible points or six points). The quality performance
category score for this MIPS eligible clinician is (50.8 + 6 CEHRT
bonus points + 6 high priority bonus points)/60 points = 62.8/60,
which is capped at 60 points, or a 100 percent score. Note that in
section II.E.5.b.(2) of this final rule with
comment period, we proposed to weight the quality performance category
at 60 percent of the MIPS final score, so a 100 percent quality
performance category score would account for 60 percent of the final
score.
Table 20--Quality Performance Category Bonus Cap Example
------------------------------------------------------------------------
Measure 1: Outcome Measure using CEHRT; 4.1 of 10 possible points;
high priority bonus: 0 *; CEHRT bonus: 1.
Measure 2: Outcome Measure using CEHRT; 9.3 of 10 possible points;
high priority bonus: 2; CEHRT bonus: 1.
Measure 3: Patient Experience using CEHRT; 10 of 10 possible points;
high priority bonus: 2; CEHRT bonus: 1.
[[Page 77302]]
Measure 4: High Priority (Care Coordination) measure using CEHRT; 10
of 10 possible points; high priority bonus: 1; CEHRT bonus: 1.
Measure 5: Outcome measure using CEHRT; 9 of 10 possible points; high
priority bonus: 2; CEHRT bonus: 1.
Measure 6: Outcome measure using CEHRT; 8.4 of 10 possible points;
high priority bonus: 2; CEHRT bonus: 1.
Total: 50.8 of 60 possible points; high priority bonus points: 9;
CEHRT bonus points: 6.
Cap applied to bonus categories (10% x total possible points for the
high priority bonus cap, and 10% x total possible points for the CEHRT
bonus cap): high priority bonus capped at 6; CEHRT bonus: 6.
Total with high priority and CEHRT bonus: ** 60.
------------------------------------------------------------------------
* Required.
** Given we cap the quality performance category score at 60.
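The bonus-cap arithmetic for the Table 20 scenario can be checked with a short sketch (hypothetical names; not CMS scoring code):

```python
# Table 20 example: 50.8 performance points, 9 high-priority and 6
# CEHRT bonus points, each bonus capped at 10 percent of 60 possible
# points (6 points); the category score itself is capped at 60 points,
# which corresponds to a 100 percent score.
performance = 50.8
bonus_cap = 6                    # 10 percent of 60 possible points
hp_bonus = min(9, bonus_cap)     # capped at 6
cehrt_bonus = min(6, bonus_cap)  # 6, exactly at the cap
total = min(performance + hp_bonus + cehrt_bonus, 60)
print(total / 60)  # 1.0
```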
(ii) Calculating the Quality Performance Category for CMS Web Interface
Reporters
CMS Web Interface reporters have different quality performance
category submission criteria; therefore, we proposed to modify our
scoring logic slightly to accommodate this submission mechanism. CMS
Web Interface users report on the entire set of measures specified for
that mechanism. Therefore, rather than scoring the top six reported
measures, we proposed to score all measures. If a group does not meet
the reporting requirements for one of the measures, then the group
would receive 0 points for that measure. We note that since groups
reporting through the CMS Web Interface are required to report on all
measures, and since some of those measures are ``high priority,'' these
groups would always have some bonus points for the quality performance
category score if all the measures are reported. That is, the group
would either report on less than all CMS Web Interface measures, in
which case the group would receive zeroes for unreported measures, or
the group would report on all measures, in which case the group would
automatically be eligible for bonus points. The other proposals for
scoring discussed in section II.E.6.a.2.(g)(i) of the proposed rule,
including bonus points, would still apply for CMS Web Interface. We
requested comment on this proposal.
The following is a summary of the comments we received regarding
our proposal to score CMS Web Interface.
Comment: Some commenters requested that we apply the policy of
scoring only the six highest scoring measures to the CMS Web Interface.
Response: For other submission mechanisms, MIPS eligible clinicians
are required to report 6 measures; therefore, we are scoring 6 required
measures. In contrast, in the transition year, the CMS Web Interface
reporters are required to report 13 individual measures, and a 2-
component diabetes composite measure. We believe it would be
appropriate to score all the required measures. However, we note that 3
measures do not have a benchmark in the Shared Savings Program;
therefore, we will only score those measures with a benchmark. For the
transition year, measures with a benchmark include 10 individual
measures and the 2-component diabetes composite measure, for a total
of 11 measures with benchmarks. In other words, CMS Web Interface
reporters are required to report on 13 individual measures and the 2-
component diabetes composite measure, but in the transition year are
only scored on the 11 of those 14 required measures that have a
benchmark (10 individual measures and the 2-component diabetes
composite measure). Therefore, we believe we have a comparable number
of measures scored in CMS Web Interface (11 measures with benchmarks)
compared to other reporting mechanisms (6 measures). In addition, we
think this policy to not score measures without a benchmark is
consistent with the Shared Savings Program and NextGen ACO programs,
which do not measure performance on selected measures. Table 21 shows
the number of CMS Web
Interface measures and indicates which have benchmarks and which are
high priority measures that would be eligible for bonus points. The
first required outcome measure would not receive bonus points. For the
two-component diabetes composite measure, both components of the
measure would need to be submitted to qualify as a high priority
measure.
Table 21--Finalized Quality Measures Available for MIPS Web Interface
Reporting in 2017
------------------------------------------------------------------------
1. NQF/Q #: 0059/001; ACO #: ACO-27. 2-Component Diabetes Composite
Measure: Diabetes: Hemoglobin A1c (HbA1c) Poor Control (>9%):
Percentage of patients 18-75 years of age with diabetes who had
hemoglobin A1c > 9.0% during the measurement period. High priority
designation: *. 2017 Shared Savings Program benchmark: Yes, diabetes
composite benchmark only.
[[Page 77303]]
    NQF/Q #: 0055/117; ACO #: ACO-41. Diabetes: Eye Exam: Percentage
of patients 18-75 years of age with diabetes who had a retinal or
dilated eye exam by an eye care professional during the measurement
period or a negative retinal or dilated eye exam (no evidence of
retinopathy) in the 12 months prior to the measurement period.
2. NQF/Q #: 0097/046; ACO #: ACO-12. Medication Reconciliation Post-
Discharge: The percentage of discharges from any inpatient facility
(e.g., hospital, skilled nursing facility, or rehabilitation facility)
for patients 18 years of age and older seen within 30 days following
discharge in the office by the physician, prescribing practitioner,
registered nurse, or clinical pharmacist providing on-going care for
whom the discharge medication list was reconciled with the current
medication list in the outpatient medical record. High priority
designation: *. 2017 Shared Savings Program benchmark: No.
    This measure is reported as three rates stratified by age group: