The experts below are selected from a list of 139,959 experts worldwide, ranked by the ideXlab platform.

Beth Sundstrom - One of the best experts on this subject based on the ideXlab platform.

  • A characterization of personal care product use among undergraduate female college students in South Carolina, USA
    Journal of Exposure Science & Environmental Epidemiology, 2020
    Co-Authors: Leslie B. Hart, Joanna Walker, Barbara Beckingham, Ally Shelley, Kerry Wischusen, Moriah Alten Flagg, Beth Sundstrom
    Abstract:

    Some chemicals used in personal care products (PCPs) are associated with endocrine disruption, developmental abnormalities, and reproductive impairment. Previous studies have evaluated product use among various populations; however, information on college women, a population with a unique lifestyle, is scarce. The proportion and frequency of product use were measured using a self-administered survey among 138 female undergraduates. Respondents were predominantly Caucasian (80.4%, reflecting the college’s student body) and represented all years of study (freshman: 24.6%; sophomore: 30.4%; junior: 18.8%; senior: 26.1%). All respondents reported use of at least two PCPs within 24 h prior to sampling (maximum = 17; median = 8; IQR = 6–11). Compared with studies of pregnant and postpartum women, adult men, and Latina adolescents, college women surveyed reported significantly higher use of deodorant, conditioner, perfume, liquid soap, hand/body lotion, sunscreen, nail polish, eyeshadow, and lip balm (chi-square, p
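    A minimal sketch of the statistics this abstract reports: the median and IQR of per-respondent product counts, and a chi-square comparison of use proportions between two populations. Python with NumPy/SciPy is assumed, and every count below is a hypothetical placeholder rather than the study's data.

```python
# Hypothetical sketch of the analysis the abstract describes; the
# counts are invented, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Per-respondent counts of PCPs used within the prior 24 h (invented).
pcp_counts = np.array([2, 5, 6, 8, 8, 9, 11, 14, 17])
median = np.median(pcp_counts)
q1, q3 = np.percentile(pcp_counts, [25, 75])
print(f"median = {median}, IQR = {q1}-{q3}")

# 2x2 contingency table: rows = population (college women vs. a
# comparison group), columns = used / did not use a product (invented).
table = np.array([[120, 18],
                  [ 80, 60]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4g}")
```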

  • Correction: A characterization of personal care product use among undergraduate female college students in South Carolina, USA
    Journal of Exposure Science and Environmental Epidemiology, 2019
    Co-Authors: Leslie Hart, Joanna Walker, Barbara Beckingham, Ally Shelley, Moriah Alten Flagg, Kerry Wischusen, Beth Sundstrom

Josip Car - One of the best experts on this subject based on the ideXlab platform.

  • Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods
    Cochrane Database of Systematic Reviews, 2015
    Co-Authors: Jose Marcano S Belisario, Jan Jamsek, Kit Huckvale, John O'Donoghue, Cecily Morrison, Josip Car
    Abstract:

    Background: Self-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving a wide geographic coverage of the target population, dealing with sensitive topics, and are less resource intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected.

    Objectives: To assess the impact that smartphone and tablet apps as a delivery mode have on the quality of survey questionnaire responses compared to any other alternative delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects.

    Search methods: We searched MEDLINE, EMBASE, PsycINFO, IEEE Xplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform, as well as the grey literature in OpenGrey, Mobile Active and ProQuest Dissertations & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches up to 12 and 13 April 2015.

    Selection criteria: We included parallel randomised controlled trials (RCTs), crossover trials and paired repeated measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. We included data obtained from participants completing health-related self-administered survey questionnaires, both validated and non-validated, and data offered both by healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondents' adherence to the original sampling protocol; and acceptability to respondents of the delivery mode. We included studies published in 2007 or after, as devices that became available during this time are compatible with the mobile operating system (OS) framework that focuses on apps.

    Data collection and analysis: Two review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap, then compared their forms to reach consensus. Through an initial systematic mapping of the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents). We conducted a narrative synthesis of the evidence because a meta-analysis was not appropriate due to high levels of clinical and methodological diversity. We reported our findings for each outcome according to the setting in which the studies were conducted.

    Main results: We included 14 studies (15 records) with a total of 2275 participants, although we included only 2272 participants in the final analyses because there were missing data for three participants from one included study. Regarding data equivalence, in both controlled and uncontrolled settings the included studies found no significant differences in the mean overall scores between apps and other delivery modes, and all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas the other study did not find a significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were reported only in uncontrolled settings. Regarding the former, an app was found to result in more complete records than paper, and in significantly more data entries than an SMS-based survey questionnaire. Regarding adherence to the sampling protocol, apps may be better than paper but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system; informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates.

    Authors' conclusions: Our results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices. Those conducting methodological research might wish to explore the issues highlighted by this systematic review.
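    To make the review's "data equivalence" outcome concrete: equivalence between two delivery modes is commonly checked by correlating paired scores from the same respondents and comparing the coefficient against a pre-specified threshold, alongside a test for a mean difference. The sketch below (Python with SciPy) illustrates that approach; the scores and the 0.75 threshold are illustrative assumptions, not values drawn from the included studies.

```python
# Illustrative data-equivalence check between two survey delivery
# modes: paired overall scores from the same respondents completing a
# questionnaire via an app and on paper. Data and threshold are
# hypothetical, not taken from the review's included studies.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

app_scores   = np.array([21.0, 18.5, 30.0, 25.5, 27.0, 19.5, 24.0, 22.5])
paper_scores = np.array([20.5, 19.0, 29.5, 26.0, 27.5, 19.0, 23.5, 23.0])

# Agreement between modes: correlation against an assumed threshold.
THRESHOLD = 0.75  # illustrative; recommended thresholds vary by field
r, _ = pearsonr(app_scores, paper_scores)
print(f"r = {r:.3f} -> {'equivalent' if r >= THRESHOLD else 'not equivalent'}")

# Mean difference between modes: paired t-test on the same respondents.
t, p = ttest_rel(app_scores, paper_scores)
print(f"paired t = {t:.3f}, p = {p:.4f}")
```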

Paul D. Blanc - One of the best experts on this subject based on the ideXlab platform.

  • Designing a gamma hydroxybutyrate (GHB) structured telephone-administered survey instrument
    Journal of Medical Toxicology, 2007
    Co-Authors: Jo E. Dyer, Ilene B. Anderson, Susan Y. Kim, Judith C. Barker, Paul D. Blanc
    Abstract:

    Introduction: As part of a larger study assessing the covariates and outcomes of GHB use, we developed a telephone survey instrument for hospitalized GHB-exposed patients identified through poison control center surveillance and for self-identified GHB users recruited from the general public.

    Methods: We used an iterative review process with an interdisciplinary team, including pharmacists, a physician, and a medical anthropologist. In designing the structured telephone survey instrument, we prioritized inclusion of validated, drug-specific, and generic questionnaire batteries or individual items related to GHB or to other drugs of abuse. Only one published survey instrument specific to GHB use was identified, which we extensively expanded and modified. We also developed a number of GHB-specific items new to this survey. Finally, we included items from the National Survey on Drug Use & Health, CAGE questionnaire items on alcohol abuse, the SF-12 instrument, and selected National Health Interview Survey items.

    Results: The final questionnaire consisted of 272 content items, the majority of which required simple yes or no responses. The bulk of the items (74%) were GHB-specific. The questionnaire was easily administered using computer-assisted telephone interview (CATI) software. A total of 131 interviews were administered, with a mean administration time of 33 ± 10 minutes. The instrument can also be used in other interview formats.

    Conclusion: Developing a successful questionnaire calls for a multidisciplinary and systematic process. Structured, telephone-administered surveys are particularly suited to expand and explore the basic information obtained by poison centers for case management.
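    To illustrate the structured, computer-assisted telephone interview (CATI) flow described above: mostly yes/no items administered in a fixed sequence, with the administration time recorded per interview. This is a hypothetical Python sketch; the item wording and the single skip rule are invented for the example and are not taken from the actual instrument.

```python
# Hypothetical sketch of a CATI-style structured interview loop with
# yes/no items and a recorded administration time. Items and the skip
# rule are invented placeholders, not the study's questionnaire.
import time

ITEMS = [
    ("ghb_ever", "Have you ever used GHB?"),                # invented
    ("ghb_30d", "Have you used GHB in the past 30 days?"),  # invented
    ("cage_cut", "Have you ever felt you should cut down on drinking?"),
]

def ask_yes_no(prompt: str) -> bool:
    """Keep prompting until the interviewer records y or n."""
    while True:
        ans = input(f"{prompt} [y/n] ").strip().lower()
        if ans in ("y", "n"):
            return ans == "y"
        print("Please answer y or n.")

def administer(items) -> dict:
    """Run one interview and record its administration time in minutes."""
    start = time.monotonic()
    responses = {}
    for key, prompt in items:
        # Invented skip rule: ask the 30-day item only of ever-users.
        if key == "ghb_30d" and not responses.get("ghb_ever", False):
            continue
        responses[key] = ask_yes_no(prompt)
    responses["_minutes"] = round((time.monotonic() - start) / 60, 1)
    return responses

if __name__ == "__main__":
    print(administer(ITEMS))
```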

Jose Marcano S Belisario - One of the best experts on this subject based on the ideXlab platform.

  • Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods
    Cochrane Database of Systematic Reviews, 2015
    Co-Authors: Jose Marcano S Belisario, Jan Jamsek, Kit Huckvale, John O'Donoghue, Cecily Morrison, Josip Car

Kerry Wischusen - One of the best experts on this subject based on the ideXlab platform.

  • A characterization of personal care product use among undergraduate female college students in South Carolina, USA
    Journal of Exposure Science & Environmental Epidemiology, 2020
    Co-Authors: Leslie B. Hart, Joanna Walker, Barbara Beckingham, Ally Shelley, Kerry Wischusen, Moriah Alten Flagg, Beth Sundstrom

  • Correction: A characterization of personal care product use among undergraduate female college students in South Carolina, USA
    Journal of Exposure Science and Environmental Epidemiology, 2019
    Co-Authors: Leslie Hart, Joanna Walker, Barbara Beckingham, Ally Shelley, Moriah Alten Flagg, Kerry Wischusen, Beth Sundstrom