Findings
The population of e-book readers is growing. In the past year, the share of Americans ages 16 and older who read e-books increased from 16% to 23%. At the same time, the share who read printed books in the previous 12 months fell from 72% to 67%.
Overall, the number of book readers in late 2012 was 75% of the population ages 16 and older, a small and statistically insignificant decline from 78% in late 2011.
The move toward e-book reading coincides with an increase in ownership of electronic book reading devices. In all, the number of owners of either a tablet computer or e-book reading device such as a Kindle or Nook grew from 18% in late 2011 to 33% in late 2012. As of November 2012, some 25% of Americans ages 16 and older own tablet computers such as iPads or Kindle Fires, up from 10% who owned tablets in late 2011. And in late 2012 19% of Americans ages 16 and older own e-book reading devices such as Kindles and Nooks, compared with 10% who owned such devices at the same time last year.
This move toward e-books has also affected libraries. The share of recent library users[1] who have borrowed an e-book from a library has increased from 3% last year to 5% this year.[2] Moreover, awareness of e-book lending by libraries is growing. The share of the overall population who are aware that libraries offer e-books has jumped from 24% late last year to 31% now.[3]
These latest figures come from a survey by the Pew Research Center’s Internet & American Life Project, conducted October 15-November 10, 2012, among 2,252 Americans ages 16 and older. The margin of error is plus or minus 2.3 percentage points.
Who reads e-books
In the book-reading population, those most likely to read e-books include those with college or graduate degrees, those who live in households earning more than $75,000, and those whose ages fall between 30 and 49.
The tables below, which show increases among various demographic groups, are based on those who say they had read a book in the past 12 months, not the full population of those ages 16 and older.
Who read books in the past 12 months
In the new Pew Internet survey, 75% of Americans ages 16 and older said they had read a book in any format in the previous 12 months. That is not a statistically significant change from the 78% who said in a late-2011 survey that they had read a book in the previous 12 months. Of them:
- 89% of the book readers said they had read a printed book. This translates into 67% of all those ages 16 and older.
- 30% of the book readers said they had read an e-book. This translates into 23% of all those ages 16 and older.
- 17% of the book readers said they had listened to an audio book. This translates into 13% of all those ages 16 and older.
All told, those book readers consumed a mean (average) of 15 books in the previous 12 months and a median (midpoint) of 6 books; in other words, half had read six or fewer books and half had read six or more. That breaks down as follows:
- 7% of Americans ages 16 and older read one book in the previous 12 months
- 14% had read 2-3 books in that time block
- 12% had read 4-5 books in that time block
- 15% had read 6-10 books in that time block
- 13% had read 11-20 books in that time block
- 14% had read 21 or more books in that time block
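The gap between the mean (15 books) and the median (6 books) reflects a skewed distribution: a small number of very heavy readers pulls the average up. A toy illustration, using hypothetical per-person counts rather than survey data:

```python
from statistics import mean, median

# Hypothetical reading counts for six people (not survey data): one heavy
# reader (64 books) lifts the mean well above the median.
books_read = [1, 3, 6, 6, 10, 64]
avg = mean(books_read)     # 15: total of 90 books across 6 readers
mid = median(books_read)   # 6: the midpoint of the sorted counts
```

With these made-up numbers the mean is 15 and the median is 6, matching the shape (though of course not the underlying data) of the survey's result.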
E-book borrowing from libraries
The shift toward e-books extends to libraries as well: the share of recent library users who borrowed an e-book from a library increased from 3% in 2011 to 5% in 2012.
Beyond that, there is growing public awareness that the vast majority of public libraries now lend e-books. Among the entire population ages 16 and older, the share who are aware that libraries offer e-book loans increased from 24% last year to 31% now. At the same time, there has been a drop in the number of people who do not know whether their local library has an e-book borrowing program: 57% now say they don’t know if their library offers e-books, down from 63% last year.[4]
1. The way we defined recent library users changed between 2011 and 2012. In 2011, recent library users were those who had used a public library for at least one of eight activities in the previous twelve months. In 2012, we defined recent library users as those who had done one of the following things in the previous twelve months: visited a public library in person, gone on a public library website, or used a cell phone, e-reader or tablet to visit a public library website or access public library resources.
2. The way we identified e-book borrowers has changed. In 2011, our question was addressed to those who had read e-books and the language was: “In the past 12 months, have you used a public library to borrow or download an e-book?” This year the question was asked of all those who had used their library’s website in the past 12 months and the language was: “In the past 12 months, have you used a public library website to borrow or download an e-book?”
3. In 2011, this question was asked of those who do not read e-books or those who read e-books but do not borrow them from the library. The figure cited here for 2011 is converted to all those ages 16 and older. In the recent survey it was asked of all adults.
4. In 2011, this question was asked of those who do not read e-books or those who read e-books but do not borrow them from the library. The figure cited here for 2011 is converted to all those ages 16 and older. In the recent survey it was asked of all adults.
Acknowledgements
The Pew Research Center’s Internet & American Life Project is an initiative of the Pew Research Center, a nonprofit “fact tank” that provides information on the issues, attitudes, and trends shaping America and the world. The Pew Internet Project explores the impact of the internet on children, families, communities, the workplace, schools, health care and civic/political life. The Project is nonpartisan and takes no position on policy issues. Support for the Project is provided by The Pew Charitable Trusts. More information is available at pewinternet.org.
Advisors for this research
A number of experts have helped Pew Internet in this research effort:
Larra Clark, American Library Association, Office for Information Technology Policy
Mike Crandall, Professor, Information School, University of Washington
Allison Davis, Senior Vice President, GMMB
Catherine De Rosa, Vice President, OCLC
LaToya Devezin, American Library Association Spectrum Scholar and librarian, Louisiana
Amy Eshelman, Program Leader for Education, Urban Libraries Council
Sarah Houghton, Director, San Rafael Public Library, California
Mimi Ito, Research Director of Digital Media and Learning Hub, University of California Humanities Research Institute
Patrick Losinski, Chief Executive Officer, Columbus Library, Ohio
Jo McGill, Director, Northern Territory Library, Australia
Dwight McInvaill, Director, Georgetown County Library, South Carolina
Bobbi Newman, Blogger, Librarian By Day
Carlos Manjarrez, Director, Office of Planning, Research and Evaluation, Institute of Museum and Library Services
Johana E. Orellana-Cabrera, American Library Association Spectrum Scholar and librarian at City of Irvine (CA) Public Libraries
Mayur Patel, Vice President for Strategy and Assessment, John S. and James L. Knight Foundation
Sharman Smith, Executive Director, Mississippi Library Commission
Michael Kelley, Editor in Chief, Library Journal
Disclaimer from the Bill & Melinda Gates Foundation
This report is based on research funded in part by the Bill & Melinda Gates Foundation. The findings and conclusions contained within are those of the author and do not necessarily reflect positions or policies of the Bill & Melinda Gates Foundation.
Methodology
Library Services Survey
Prepared by Princeton Survey Research Associates International for the Pew Research Center’s Internet & American Life Project November 2012
SUMMARY
The Library Services Survey, conducted by the Pew Research Center’s Internet & American Life Project and funded by the Bill & Melinda Gates Foundation, obtained telephone interviews with a nationally representative sample of 2,252 people ages 16 and older living in the United States. Interviews were conducted via landline (nLL=1,127) and cell phone (nC=1,125, including 543 without a landline phone). The survey was conducted by Princeton Survey Research Associates International. The interviews were administered in English and Spanish by Princeton Data Source from October 15 to November 10, 2012. Statistical results are weighted to correct known demographic discrepancies. The margin of sampling error for results based on the complete set of weighted data is ±2.3 percentage points. Results based on the 1,945 internet users[5] have a margin of sampling error of ±2.5 percentage points.
Details on the design, execution and analysis of the survey are discussed below.
DESIGN AND DATA COLLECTION PROCEDURES
Sample Design
A combination of landline and cellular random digit dial (RDD) samples was used to represent all adults in the United States who have access to either a landline or cellular telephone. Both samples were provided by Survey Sampling International, LLC (SSI) according to PSRAI specifications.
Numbers for the landline sample were drawn with probabilities in proportion to their share of listed telephone households from active blocks (area code + exchange + two-digit block number) that contained three or more residential directory listings. The cellular sample was not list-assisted, but was drawn through a systematic sampling from dedicated wireless 100-blocks and shared service 100-blocks with no directory-listed landline numbers.
Contact Procedures
Interviews were conducted from October 15 to November 10, 2012. As many as 7 attempts were made to contact every sampled telephone number. Sample was released for interviewing in replicates, which are representative subsamples of the larger sample. Using replicates to control the release of sample ensures that complete call procedures are followed for the entire sample. Calls were staggered over times of day and days of the week to maximize the chance of making contact with potential respondents. Interviewing was spread as evenly as possible across the days in field. Each telephone number was called at least one time during the day in an attempt to complete an interview.
For the landline sample, interviewers asked to speak with the youngest male or female ages 16 or older currently at home based on a random rotation. If no male/female was available, interviewers asked to speak with the youngest person age 16 or older of the other gender. This systematic respondent selection technique has been shown to produce samples that closely mirror the population in terms of age and gender when combined with cell interviewing.
For the cellular sample, interviews were conducted with the person who answered the phone. Interviewers verified that the person was age 16 or older and in a safe place before administering the survey. Cellular respondents were offered a post-paid cash reimbursement for their participation.
WEIGHTING AND ANALYSIS
The first stage of weighting corrected for different probabilities of selection associated with the number of adults in each household and each respondent’s telephone usage patterns.[6] This weighting also adjusts for the overlapping landline and cell sample frames and the relative sizes of each frame and each sample.
This first-stage weight for the ith case can be expressed as:
Where SLL = size of the landline sample
SCP = size of the cell phone sample
ADi = Number of adults in the household
R = Estimated ratio of the land line sample frame to the cell phone sample frame
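The first-stage weight formula itself (originally an image) did not survive extraction. A standard dual-frame selection weight consistent with the definitions above can be sketched as follows; this is a reconstruction, not necessarily PSRAI’s exact published form:

```latex
% Reconstruction under stated assumptions: a case's chance of selection is the
% sum of its landline-frame and cell-frame selection probabilities. With
% R = F_{LL}/F_{CP} (ratio of frame sizes), the probability is proportional to
%   S_{LL} / (AD_i \cdot F_{LL}) + S_{CP} / F_{CP},
% and the first-stage weight is its inverse (up to a constant factor):
\[
  WT_i \;\propto\; \left[ \frac{1}{R}\cdot\frac{S_{LL}}{AD_i} + S_{CP} \right]^{-1}
\]
```

The division by AD_i reflects that only one adult is interviewed per landline household, while cell phones are treated as personal devices.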
The equations can be simplified by plugging in the values SLL = 1,127 and SCP = 1,125. Additionally, we estimate the ratio of the size of the landline sample frame to the cell phone sample frame at R = 0.60.

The final stage of weighting balances sample demographics to population parameters. The sample is balanced by form to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density, and telephone usage. Hispanic origin was split out based on nativity: U.S.-born and non-U.S.-born. The White, non-Hispanic subgroup is also balanced on age, education and region.

The basic weighting parameters came from a special analysis of the Census Bureau’s 2011 Annual Social and Economic Supplement (ASEC) that included all households in the United States. The population density parameter was derived from Census data. The cell phone usage parameter came from an analysis of the July-December 2011 National Health Interview Survey.[7][8]

Weighting was accomplished using Sample Balancing, a special iterative sample weighting program that simultaneously balances the distributions of all variables using a statistical technique called the Deming Algorithm. Weights were trimmed to prevent individual interviews from having too much influence on the final results. The use of these weights in statistical analysis ensures that the demographic characteristics of the sample closely approximate those of the national population. Table 1 compares weighted and unweighted sample distributions to population parameters.
Effects of Sample Design on Statistical Inference
Post-data collection statistical adjustments require analysis procedures that reflect departures from simple random sampling. PSRAI calculates the effects of these design features so that an appropriate adjustment can be incorporated into tests of statistical significance when using these data. The so-called “design effect” or deff represents the loss in statistical efficiency that results from unequal weighting and other departures from simple random sampling. The total sample design effect for this survey is 1.24.
PSRAI calculates the composite design effect for a sample of size n, with each case having a weight w_i, as:

deff = n · Σ(w_i²) / (Σ w_i)²
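As a sketch, the composite (Kish) design effect, deff = n · Σ(w_i²) / (Σ w_i)², can be computed directly from a set of case weights. The weights below are hypothetical, not the survey's actual weights:

```python
# Composite (Kish) design effect for a small set of hypothetical case weights.
# deff = n * sum(w_i^2) / (sum(w_i))^2; equals 1.0 for equal weights and grows
# as the weights become more unequal.
weights = [0.8, 1.0, 1.2, 0.5, 1.5]
n = len(weights)
deff = n * sum(w * w for w in weights) / sum(weights) ** 2
# For these illustrative weights, deff is approximately 1.116.
```

With perfectly equal weights the expression reduces to 1, i.e. no efficiency loss relative to simple random sampling.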
In a wide range of situations, the adjusted standard error of a statistic should be calculated by multiplying the usual formula by the square root of the design effect (√deff). Thus, the formula for computing the 95% confidence interval around a percentage is:

p ± 1.96 · √deff · √( p(1 − p) / n )

where p is the sample estimate and n is the unweighted number of sample cases in the group being considered.
The survey’s margin of error is the largest 95% confidence interval for any estimated proportion based on the total sample — the one around 50%. For example, the margin of error for the entire sample is ±2.3 percentage points. This means that in 95 out of every 100 samples drawn using the same methodology, estimated proportions based on the entire sample will be no more than 2.3 percentage points away from their true values in the population. The margin of error for estimates based on form 1 or form 2 respondents is ±3.3 percentage points. It is important to remember that sampling fluctuations are only one possible source of error in a survey estimate. Other sources, such as respondent selection bias, questionnaire wording and reporting inaccuracy, may contribute additional error of greater or lesser magnitude.
Response Rate
Table 2 reports the disposition of all sampled telephone numbers ever dialed from the original telephone number samples. The response rate estimates the fraction of all eligible respondents in the sample who were ultimately interviewed. At PSRAI it is calculated by taking the product of three component rates:[9]
- Contact rate – the proportion of working numbers where a request for interview was made[10]
- Cooperation rate – the proportion of contacted numbers where consent for interview was at least initially obtained, versus those refused
- Completion rate – the proportion of initially cooperating and eligible interviews that were completed
Thus the response rate for the landline sample was 11.4 percent. The response rate for the cellular sample was 11 percent.
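The report gives only the final response rates (11.4 percent landline, 11 percent cell), not the three component rates, so the values below are purely hypothetical, chosen only to illustrate how the product works:

```python
# Response rate as the product of three component rates (PSRAI's method).
# All three inputs are hypothetical; the report does not publish them.
contact_rate = 0.75      # hypothetical: share of working numbers contacted
cooperation_rate = 0.30  # hypothetical: share of contacts who consented
completion_rate = 0.50   # hypothetical: share of consents completed
response_rate = contact_rate * cooperation_rate * completion_rate
print(round(response_rate, 4))  # 0.1125, i.e. 11.25% for these made-up inputs
```

A low overall response rate can thus arise even when each individual stage performs moderately well.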
5. Internet user is defined based on those accessing the internet occasionally, sending or receiving email, and/or accessing the internet on a cell phone, tablet, or other mobile handheld device.
6. I.e., whether respondents have only a landline telephone, only a cell phone, or both kinds of telephone.
7. Blumberg SJ, Luke JV. Wireless substitution: Early release of estimates from the National Health Interview Survey, July-December, 2011. National Center for Health Statistics. June 2012.
8. The phone use parameter used for this 16+ sample is the same as the parameter we use for all 18+ surveys. In other words, no adjustment was made to account for the fact that the target population for this survey is slightly different than a standard 18+ general population survey.
9. PSRAI’s disposition codes and reporting are consistent with the American Association for Public Opinion Research standards.
10. PSRAI assumes that 75 percent of cases that result in a constant disposition of “No answer” or “Busy” are actually not working numbers.
Source: libraries.pewinternet.org