Andreeva recently discussed several “challenges for research in poor countries” (Andreeva, 2012). Below is a list of some of these challenges and my comments.

For many public health studies in low- or middle-income countries, population surveys are the only affordable means of data collection. Population surveys are valuable sources of health information. For example, surveys have estimated the prevalence, severity, and treatment of mental disorders in various countries, including Ukraine (WHO, 2004). But population surveys can be very expensive, so alternative approaches to data collection should not be overlooked. These include case-control studies, ecologic studies, and qualitative research designs such as focus groups. A case-control design was used by Donetsk State Medical University to investigate the contraceptive practices of Ukrainian women and the factors behind their contraceptive preferences (Mogilevkina, 2003). For a case-control study of diphtheria vaccine efficacy in Ukraine, demographic and vaccination data were gathered from health center records (Tsu, 2000). Focus group methods were used by the Ukraine Institute for Public Health Policy and others to investigate the obstacles to antiretroviral therapy perceived by HIV-infected injection drug users (Mimiaga, 2010), and in a separate study on everyday understandings of health and the factors influencing it (Abbott, 2006).

Another challenge faced by survey scientists concerns the validity of self-reported data. Validity is central to all research. According to Bonita et al. (2006, page 57), “A study is valid if its results correspond to the truth.” Self-reports can be satisfactory data sources if investigators take sufficient care in their design and use (Schaeffer, 2003).
For example, before conducting health surveys in low- and middle-income countries with questionnaires developed for use in high-income countries, researchers may first want to use focus group interviews to gauge what the survey questions mean to people in the target countries (Kitzinger, 1995). If necessary, focus groups can help researchers modify question wording appropriately. In any case, for many health measures it is difficult to think of an alternative to self-reports. The recent finding that fewer teenagers in the United States are driving after drinking, for example, comes from risk behavior data collected from thousands of high school students through national surveys (Shults, 2012).

Due to high subscription fees, many researchers in low- and middle-income countries lack access to necessary literature. This is a serious obstacle, but it has a partial, temporary solution. In 2002 the Access to Research in Health Programme (HINARI) was established by the World Health Organization in partnership with major publishers (http://www.who.int/hinari/en/, accessed 4 Oct 2012). This venture provides free or low-cost online access to the major journals in biomedical and related social sciences to local, not-for-profit institutions in developing countries. Some 8,500 journals and 7,000 e-books (in 30 different languages) are now available to health institutions in more than 100 countries. To move access to global knowledge beyond HINARI, an international team of editors, researchers, and authors has proposed that WHO take the lead in championing the goal of “health information for all” (Godlee et al., 2004). Besides HINARI, researchers in some developing countries have gained access to scientific literature through partnerships with foreign researchers, for example in projects supported by the Fogarty International Center of the US National Institutes of Health (http://www.fic.nih.gov, accessed 4 Oct 2012).
For persons interested in tobacco control, an inventory of financial and structural resources to support global tobacco control research and research capacity in developing countries is available (Lando et al., 2005).

Many decisions in low- and middle-income countries are still opinion-based. Alas, this is all too often the case in the US and Europe as well. For example, little policy has developed in response to the growing threat from climate change to health and the environment. The path from the discovery of scientific knowledge to its effects on human behavior is usually long and unpredictable. Current epidemiology training focuses on epidemiologic methods, with little attention to how the science of epidemiology is translated into effective health policy (Brownson, 1998, page 377). Moreover, research findings always carry some degree of uncertainty, and policy choices depend on many social, cultural, and economic factors, including people’s opinions and beliefs. Fortunately, expert guidance is available on ways to communicate research findings to the public and policymakers that increase the chance that good science will result in good public health (Nelson, 2011; Remington et al., 2011; Brownson et al., 2011). A somewhat contrary view is that researchers are not responsible for the translation of their findings into public policy and should enter the political fray cautiously (Rothman & Poole, 1985).

The gold standard of studies generating such evidence is the randomized controlled trial. Bonita et al. (2006, page 95) distinguish among study designs by ranking their ability to provide evidence for causality between an exposure and a disease: “strong” for randomized controlled trials, “moderate” for cohort and case-control studies, and “weak” for cross-sectional and ecological studies.
However, Steven N Goodman of Stanford University and Gerald J Dal Pan of the US Food and Drug Administration, speakers at the 2012 American College of Epidemiology Annual Meeting, indicated that the traditional hierarchy of scientific evidence may be too simple. They argued that experiments have more limits than generally appreciated, and that evidence from observational studies can also be “golden”. In any case, research conclusions have historically lacked widespread credibility in the scientific community until they have been confirmed by multiple studies using different study designs in different populations.

Some consider such surveys public health surveillance rather than data for testing research hypotheses about the effects of intended policy measures. I would agree that some surveys, such as the tobacco prevalence survey in several eastern European countries, including Ukraine (Andreeva et al., 2010), are a type of public health surveillance. However, such data collection activities differ from traditional disease surveillance systems, which detect and investigate new cases of notifiable diseases such as tuberculosis and measles (Bonita et al., 2006). For several decades, the US Agency for International Development has funded the Demographic and Health Surveys (DHS), which collect nationally representative household data on a wide range of monitoring and evaluation indicators for population, health, and nutrition (http://measuredhs.com/, accessed 3 Oct 2012). These surveys have been completed in Ukraine and a half dozen other countries of the former Soviet Union, and all DHS countries, especially those with repeated surveys, have results that can be assessed for their relevance to health policy. I am reluctant to classify prevalence surveys as “descriptive” or “analytic” without more information about the specific survey.
In the US, the Behavioral Risk Factor Surveys, conducted annually in the 50 states with coordination and support from the US Centers for Disease Control and Prevention, have been used for both description and analysis.

Researchers from poor countries will certainly benefit from thoughtful reviews of their studies by more experienced colleagues. A core principle of global health is that the knowledge and experience of every country, regardless of income level, are required for truly effective public health science and action. Journals have a mission, and they will publish work from any country if it fits that mission. Some journals explicitly invite submissions from developing countries, including papers from authors whose mother language is not English. On the other hand, journals have major limits (Nordstrom, 2008). To protect their resources, they routinely reject some manuscripts without circulating them for external review when the editor determines they have little chance of acceptance. Most journals have no paid staff, and most peer reviewers are volunteers. An editor of one western journal has candidly discussed the challenges and opportunities of reviewing and publishing research manuscripts from developing countries (Malone, 2012).