The US health care system developed largely through the private sector. No major government health insurance programmes operated until the mid-1960s, and most government involvement until then took the form of state rather than federal regulation. While more Americans have private than public insurance – and the 2006 inclusion of prescription drugs under Medicare and the 2010 comprehensive reforms both relied on expanding the private insurance market – public and private sector spending are now roughly equal (see section 3.1). This is primarily because Medicare beneficiaries – seniors and the permanently disabled population – are more costly to cover than others.

Through most of the nineteenth century many different types of practitioner in the United States competed to provide care, much of which was of poor quality (Starr, 1982). Physicians typically had neither particularly high incomes nor social status. This changed only gradually towards the beginning of the twentieth century with the confluence of various factors, including a more scientific basis for medicine, improvements in medical training and the quality of hospitals, and consolidation of competing physician interests under the auspices of local (county) and state medical societies and nationally through the American Medical Association (AMA).

The 1910 publication of the Flexner Report represented a turning point in US health policy. Commissioned by the Carnegie Foundation, the report provided a detailed account of the poor quality of most US medical schools at the time. It eventually led to the closure of some of the worst schools and to improvements in medical curricula, the length of training, the quality of admitted students and the quality of training facilities. As a result, barriers to entering the field rose.

During the latter part of the nineteenth century and the first part of the twentieth century hospitals also changed dramatically. Previously their reputation had been poor: they were places to be avoided by those with alternatives (people who could afford it received care at home), and they mainly served the poor. As the scientific basis of medicine improved, facilities were enhanced and physicians became better trained – the hospital was transformed. The modern hospital largely evolved as a not-for-profit organization in which physicians were granted privileges to treat their own patients. This arrangement was particularly appealing to the medical community because physicians could avail themselves of the latest technology and a cadre of trained nurses free of charge – what has been dubbed a “rent-free workshop” (Gabel & Redisch, 1979).

Private health insurance in the United States had its beginnings in the late 1920s and early 1930s, with the establishment of non-profit Blue Cross plans for hospital care and, soon thereafter, Blue Shield plans for physician care. The genesis of Blue Cross lay in workers’ and employers’ desire for hospital coverage on the one hand, and, on the other, the need for a steady stream of revenue on the part of hospitals mired in the Great Depression. The first hospital insurance plan began in 1929 in Dallas, Texas. In other parts of the country hospitals banded together to provide this coverage under the auspices of Blue Cross, giving enrollees the freedom to choose their own hospital. These arrangements were non-profit and did not require the cash reserves typical of private insurance, because hospitals guaranteed the provision of services – something made possible by the empty beds of the Depression years (Starr, 1982). Near the end of the 1930s Blue Shield plans covering physicians’ services were established under similar principles: non-profit status and free choice of provider.

Blue Cross and Blue Shield plans began to encounter competition from commercial (for-profit) insurers, particularly after the Second World War. While the Blues had, until that time, used “community rating” (where all contracting groups pay the same price for insurance), commercial insurers employed “experience rating” (where premiums vary based on the past health status of the insured group), allowing them to charge lower prices to employer groups with lower expected medical expenses. Eventually, the Blues had to follow suit and switch to experience rating to remain competitive, blurring the distinction between the non-profit and for-profit insurers (Law, 1974; Starr, 1982). By 1951 more Americans obtained their hospital insurance from commercial insurers than from Blue Cross (Law, 1974). More recently, a number of Blue Cross and Blue Shield plans have reorganized to become for-profit organizations.
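
To make the pricing distinction concrete, the following is a minimal sketch in Python. The group sizes, expected costs and administrative loading are hypothetical figures chosen purely for illustration; the chapter does not report actual premiums.

```python
# Illustrative only: two hypothetical employer groups with different expected
# annual medical costs per member (all figures are invented for the example).
groups = {
    "young_tech_firm": {"members": 400, "expected_cost": 1500.0},
    "older_manufacturer": {"members": 600, "expected_cost": 3500.0},
}

LOADING = 1.10  # hypothetical 10% administrative load on top of expected claims

# Community rating (the early Blues): every group pays the same premium,
# based on the pooled average cost across all insured members.
total_cost = sum(g["members"] * g["expected_cost"] for g in groups.values())
total_members = sum(g["members"] for g in groups.values())
community_premium = LOADING * total_cost / total_members

# Experience rating (the commercial insurers): each group's premium reflects
# its own expected costs, so lower-cost groups pay less.
experience_premiums = {name: LOADING * g["expected_cost"] for name, g in groups.items()}

print(f"Community-rated premium (every group): ${community_premium:,.0f}")
for name, premium in experience_premiums.items():
    print(f"Experience-rated premium, {name}: ${premium:,.0f}")
```

With these invented numbers, the lower-cost group pays roughly US$ 1650 under experience rating versus US$ 2970 under community rating, which illustrates why commercial insurers could undercut the Blues for healthier employer groups.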

The number of Americans with private health insurance coverage grew dramatically in the 1940s and 1950s. While only 6 million people had some type of health insurance coverage in 1939, by 1950 the number had risen to 75 million – half the population. By the time Medicare and Medicaid were enacted in 1965, insurance coverage (public and private) had expanded further to 156 million people – 80% of the population (Jost, 2007).

The tremendous growth of private insurance during this period was due in part to the fact that employer contributions to employee health insurance plans were not treated as taxable income for the employee (Gabel, 1999; Helms, 2008). There were other reasons for the expansion of private insurance through employment, however. Unions negotiated coverage for their members, and it was viewed as an important benefit because health care costs were rising at the time (Jost, 2007). There are also economies of scale in purchasing through a group, and group premiums tend to be lower because there is less concern about adverse selection. These factors, coupled with rising incomes during and after the Second World War and new organizational forms for providing coverage, also help explain the growth (Cunningham, 2000). With no systematic government programme for providing coverage until the mid-1960s, this demand was satisfied in part through the employment-based system, at least for many of those in the workforce.

In 1965 the first major federal health insurance programmes, Medicare and Medicaid, were established. Prior to their creation, a variety of indigent and charity care programmes existed for low-income patients. In one such programme, begun in 1950, the federal government matched state payments to medical providers for those receiving public assistance. In another, the Kerr-Mills Act of 1960 provided assistance to states to help seniors who were not on public assistance, but who required help with their medical bills (US Department of Health and Human Services, 2000).

Medicare covered Americans aged 65 and older, and Medicaid covered about half of those with low incomes.[2] At its inception, Medicare was divided into two parts. Part A (Hospital Insurance) was social insurance in that it was funded by payroll taxes on the working population. Part B (Supplementary Medical Insurance) covered physician and outpatient services and, although voluntary, was purchased by nearly all seniors since 75% of the premiums were paid from general federal revenues. Medicaid, in contrast, reflected a welfare model: only those who met both income and certain categorical eligibility requirements (e.g. children under the age of 18 and female adults with children) could receive the coverage, which was largely provided free of patient charges. As discussed in section 3.3.2, states have had some flexibility in defining who is eligible for coverage. However, those that chose to expand Medicaid eligibility under the ACA must meet specific federal requirements.

Prior to the enactment of Medicare, it was common for elderly Americans to be without health insurance. Just over half of Americans aged 65 and older had hospital coverage before 1963, with far fewer being insured for surgery or outpatient care (US Department of Health and Human Services, 2010a). Moreover, hospital coverage among seniors prior to 1963 varied by region, from a low of 43% to a high of 68% (Finkelstein, 2005). Since Medicare was passed into law, however, almost all Americans aged 65 and over have been covered for hospital and physician care.

Another key result of the passage of Medicare was the desegregation of hospitals in the south of the United States. The country’s history with regard to race is shameful, and the segregation of health care facilities is a prime example. According to Jill Quadagno (2000, p. 69), racial discrimination was as pervasive in the health care system as in other social institutions: many hospitals maintained “white” and “colored” floors, labelled equipment by race, and reserved a certain number of beds for patients of each race. Throughout the South, black doctors were refused staff privileges and black students were excluded from nurse training programmes. Fourteen southern states had constructed entire hospital systems based on the principle of “separate but equal”. Nor was the problem confined to the South: in northern cities, too, many hospitals segregated black patients from white patients and discriminated against black health care workers.

Medicare changed this by tying programme funding to the integration of facilities – a requirement of the newly enacted Civil Rights Act of 1964. Within just six months of the programme’s implementation, “nearly every hospital in the country admitted patients regardless of race. Most racially segregated wards were dismantled, and hospitals began granting staff privileges to black physicians” (Quadagno, 2004, p. 178).

Passage of the Medicare legislation – Title XVIII of the Social Security Act, now titled “Health Insurance for the Aged and Disabled” – was difficult. Proposals to cover seniors had been before Congress for more than a decade but made little headway, in part because of opposition from organized medicine.[3] Passage did not occur until a number of compromises were made, including payments to hospitals based on their costs, payments to physicians based on their charges, and the use of private insurers to administer the programme. Eventually the federal government moved to enact payment reforms to control Medicare costs. In 1983 Congress adopted the diagnosis-related group (DRG) system for Medicare, which changed hospital reimbursement from a system based on costs to one involving a fixed prospective payment based on the patient’s diagnosis. Then in 1989 Congress enacted a Medicare fee schedule for physicians in the form of a resource-based relative value scale (RBRVS) to replace the previous charge-based system, with further controls placed on annual rates of increase in aggregate programme payments. The RBRVS system also aimed to narrow the gap between payments for primary care services and payments for specialist services (for more on payment mechanisms see Box 3.5).
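
To make the logic of these two reforms concrete, here is a minimal sketch in Python. The base rate, DRG weight, relative value units (RVUs) and conversion factor shown are placeholder values rather than actual Medicare figures, and real payments include further adjustments (e.g. geographic indices and outlier payments) that are omitted here.

```python
# Simplified illustration of the two Medicare payment reforms described above.
# All numeric values are placeholders rather than actual Medicare rates.

def drg_payment(base_rate: float, drg_weight: float) -> float:
    """Prospective hospital payment: a fixed amount per admission determined by
    the relative weight of the patient's diagnosis-related group (DRG),
    rather than by the costs the hospital actually incurred."""
    return base_rate * drg_weight

def rbrvs_payment(work_rvu: float, practice_expense_rvu: float,
                  malpractice_rvu: float, conversion_factor: float) -> float:
    """Physician fee-schedule payment: the service's relative value units
    (work, practice expense, malpractice) summed and multiplied by a dollar
    conversion factor, replacing payment based on the physician's charges."""
    return (work_rvu + practice_expense_rvu + malpractice_rvu) * conversion_factor

# Hypothetical examples
print(drg_payment(base_rate=6000.0, drg_weight=1.8))           # a complex admission
print(rbrvs_payment(1.3, 1.1, 0.1, conversion_factor=35.0))    # an office visit
```

The essential shift in both cases is the same: payment is set prospectively by a formula rather than derived from each provider’s own costs or charges.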


One notable gap in Medicare benefits was outpatient prescription drug coverage. In 1988 the Medicare Catastrophic Coverage Act was signed into law. The law added drug coverage as well as other provisions related particularly to long-term care, but Congress repealed it just a year later. One reason was that the new benefit was to be funded entirely by Medicare beneficiaries. Many of them, however, already had supplemental prescription drug coverage through a former employer. There was also tremendous confusion about what the law did and did not cover (Rice, Desmond & Gabel, 1990).

Almost two decades later, in 2003, a drug benefit was successfully added to Medicare, effective January 2006. Beneficiaries obtain their drug coverage by purchasing it from private insurers, which compete for Medicare subscribers. The benefit is subsidized on the order of 75% by general federal revenues.

The United States has dabbled in health planning activities – albeit far less than many other high-income countries – but those involving regulation have been out of favour for the past three decades. If the concept is defined more broadly, however, to include public investments aimed at increasing the supply of selected services, then such activities have been more prevalent.

An early planning initiative in the United States was the Hill-Burton Act, which became law in 1946. It provided grants that allowed municipalities to build or expand hospitals until a particular bed-to-population ratio was achieved. In return, hospitals were required to provide a reasonable volume of services to persons unable to pay and to make their services available to all persons residing in the facility’s area (US Department of Health and Human Services, 2010b). While the programme stopped providing funding in 1997, many hospitals are still required to continue providing charity care.

Certificate of need (CON) programmes were introduced in a number of states in the 1970s. They were designed to control hospital expenditures – primarily on beds and equipment. CON reached its peak soon after the passage of the National Health Planning and Resources Development Act of 1974, which required its use through the establishment of local Health Systems Agencies, administered by local boards with a majority of members representing consumers (Starr, 1982). Originally, hospitals needed permission from the Health Systems Agencies for investments of more than US$ 100 000.

Most research has found that CON was not effective in controlling hospital spending. While in some areas the number of hospital beds grew more slowly than it otherwise might have, one spillover was an increase in capital spending per hospital bed (Salkever & Bice, 1976). A major problem was that the Health Systems Agencies were local boards: communities tended to benefit from higher hospital spending (more jobs, better-equipped hospitals) but bore little of the cost, since health care is largely paid for by public and private insurers (Rice & Kominski, 2014). Moreover, no funding was made available to carry out the plans (IOM, 1981; Hyman, 1982). The federal requirement that states employ CON was repealed in 1987, although most states (36 at the time of publication) still maintain a variety of CON or related regulations (NCSL, 2018a). Beyond CON, there has been little in the way of health care capital controls in the United States. In general, hospitals are not restricted by government in their purchases of medical equipment or other capital expenditures.

There have been a number of initiatives to encourage providers to go into primary care or to provide services in underserved areas. One notable effort that began in the 1970s and still exists today is the National Health Service Corps, which provides scholarships and loan repayments for physicians who practise at approved sites such as federally supported health centres, rural areas, Indian Health Service clinics and public health department clinics.

A number of innovative organizational forms of health care delivery were developed in the United States. Health Maintenance Organizations (HMOs) are organizations that provide, or contract to provide, health care services to members on a largely prepaid basis through a network of providers. They existed in the United States for most of the twentieth century, although the term itself was not coined until 1970, by Paul Ellwood. The first prepaid group practice was the Ross-Loos Medical Group, which began in Los Angeles, California, in 1929 and provided prepaid care to about 2000 municipal employees. The largest of the early HMOs – and still the largest today – was Kaiser Permanente, started by industrialist Henry J. Kaiser and physician Sidney Garfield in the 1930s for construction, steel and shipyard workers in southern California.

While early HMOs had their own dedicated physician staff, in recent years the market has shifted to the independent practice association (IPA) and network models – sometimes called HMOs without walls. Under these arrangements the HMO contracts with multiple medical groups and hospitals to provide services to enrollees. In most arrangements all care except emergencies must be provided by network providers, while in others (point-of-service plans) an enrollee can go outside the network, but at substantial out-of-pocket (OOP) cost.

HMO enrolment grew rapidly beginning in the 1980s, particularly with the rise of the IPA and network models, which, unlike group- and staff-model HMOs, tended to be for-profit organizations. This stemmed, in part, from the passage of the HMO Act of 1973, which among other things required that employers with more than 25 employees that offered health insurance include at least one HMO option if one was available in their geographical area. Since the late 1990s, however, the market share of HMOs has fallen substantially. One reason is the managed care backlash of the mid- to late 1990s, when patients rebelled against the tight restrictions that HMOs placed on such things as seeking specialist care and hospital admission (Journal of Health Politics, Policy and Law, 1999), as well as the difficulties providers faced in being reimbursed for the care they delivered.

Since then, Preferred Provider Organizations (PPOs) have come to dominate the private insurance market. PPOs contract with a network of providers but they tend to pay physicians on a (discounted) fee-for-service (FFS) basis and generally make it easier to seek care outside the network, in particular for specialist services. In 2017, among insured employees, 48% were in PPOs and only 24% in HMO or point-of-service plans (Claxton et al., 2017). (Most of the remainder are in high-deductible plans.) The popularity of PPOs stems in large part from their flexibility: employers can design a health benefits plan tailored to their specifications, and patients can seek care from any provider they wish but pay less out of pocket when they use their PPO’s network. Moreover, as HMOs removed many of their more onerous restrictions, their cost advantage over PPOs substantially declined (Hurley, Strunk & White, 2004). Finally, some enrollees feel more comfortable when their providers do not have strong financial incentives to control the amount of services provided.
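
As a purely illustrative sketch of the cost-sharing differential that steers PPO enrollees towards network providers, the short Python example below uses a hypothetical allowed amount and coinsurance rates that are not drawn from any actual plan.

```python
# Hypothetical PPO cost sharing: the same service leaves the patient with a
# larger out-of-pocket bill when delivered outside the plan's network.
# (Deductibles and balance billing are ignored to keep the example minimal.)

def patient_oop(allowed_amount: float, coinsurance_rate: float) -> float:
    """Patient's out-of-pocket share of an allowed charge."""
    return allowed_amount * coinsurance_rate

ALLOWED = 200.0  # hypothetical allowed amount for an office visit

print(f"In-network (20% coinsurance):     ${patient_oop(ALLOWED, 0.20):.2f}")
print(f"Out-of-network (40% coinsurance): ${patient_oop(ALLOWED, 0.40):.2f}")
```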

A more recent development is the Accountable Care Organization (ACO). Briefly, ACOs are health care providers, often consortia of independent organizations, that work in concert to improve patients’ health and reduce costs. The key element is coordinating patient care across a range of settings. Participating providers and organizations are rewarded by public and/or private payers with part of the savings that may accrue, as well as for quality improvement. ACOs are described in more detail in Box 3.3 in Chapter 3.


A final development over the past few decades has been the gradual movement towards the corporatization of medicine in the United States. Increasingly, hospitals, physician groups and insurers have been merging, thereby forming larger entities – and often becoming publicly traded – in part to take advantage of economies of scale but more often to increase their leverage in bargaining with other entities in the health care sector. To give a single example – trends in for-profit ownership in the United States – between 1980 and 2012 the percentage of for-profit organizations increased at the following rates:

In contrast, the percentage of nursing home facilities that were for-profit fell slightly, although the figure remains at nearly 70% (Rice & Unruh, 2016).

In March 2010 the United States enacted major health care reform. The ACA expanded coverage to the majority of uninsured Americans through: (1) subsidies to help lower-income individuals and families purchase coverage; (2) a mandate that most Americans obtain insurance or face a penalty; (3) a requirement that firms with 50 or more employees offer coverage or pay a penalty; (4) a major expansion of Medicaid; and (5) new regulation of health insurers, including guaranteed issue and renewal of coverage, community rating so that applicants with pre-existing conditions or a history of illness cannot be charged more, and a requirement to cover a specified benefits package. Most of the major provisions went into effect in 2014. A brief summary of the ACA is included in Box 2.2. Chapter 6 is devoted to the ACA, so the specifics of the legislation are deferred until then.


Although the ACA did not result in universal health care coverage, it represents – along with Medicare and Medicaid – a major effort to move towards that goal. Efforts to provide comprehensive national health insurance in the United States go back to the Great Depression, and nearly every president since Harry S. Truman – who held office from 1945 to 1953 – has proposed some form of national health insurance. Box 2.1 provides a brief summary of some of these efforts.
