exploratory efficacy for learning intervention – any suggestions on methods?


  • #869
    Josh Whitkin

    I’m working on an unannounced video-game-based intervention that may be having effects beyond its original aim.

    This game was built to raise awareness of the need for independent living skills among foster youth. We are running a proper RCT for that aim…but the game may be having effects on many other topics for a general population aged 8–15.

    I’ve observed 100 kids playing it for 1 hour each, and based on my observations and unstructured chats with them while they played, I think the game could be affecting their awareness, attitudes and knowledge on:

     – drug & alcohol use
     – peer social skills – “good” vs “bad” friendships
     – experience with adults – bosses, landlords
     – awareness of race and gender prejudice
     – independent living skills – practice

    That’s too wide a variety of topics. I’d like to focus on the most impactful ones…but how do I choose?

    Here are the two best ideas I can come up with.

    Idea A. “shotgun” screening method, perhaps like:

    1. identify 1–3 top hypotheses for each topic (e.g. for drugs, “do users think strategically, not tactically, about the cost/benefit of marijuana use, after playing?”)
    2. select a few items from existing instruments related to that hypothesis (i.e., narrow the many possible outcomes down to the most important measurable effect)
    3. run a ‘shotgun’ pilot study: e.g. test 20 kids at a time, with pre/post Likert surveys, across all topics in the same session, in a randomized, double-blind controlled playtest (fill in the form, get randomly assigned either this game or a control game, play for an hour, fill in the form again)
    4. if we see notable effects, do a better study – larger sample, etc. (like Dana is doing). For the rest we could decide “nah, VSG is very unlikely to be having an impact here.”
    Obviously this method has huge limits – e.g. it only measures impact from a short play session, and only immediately after that session; it asks teens to self-report on subtle factors; etc. In short, I’m not convinced it’ll get me much data worth having.
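One way to make step 4’s “notable effects” decision concrete might be to compare gain scores (post minus pre) between the game and control groups for each topic, and keep only topics whose effect size clears a deliberately generous pilot threshold. A minimal sketch, using only Python’s standard library – the topic names, scores, and the 0.5 threshold are all made up for illustration:

```python
# Hypothetical sketch of the "shotgun" screening analysis (Idea A, step 4).
# All numbers are invented; topic names and the threshold are assumptions.
import statistics

def cohens_d(a, b):
    """Effect size between two independent samples (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

def screen(topics, threshold=0.5):
    """Flag topics whose gain-score effect size crosses the pilot threshold."""
    return {t: round(cohens_d(game, ctrl), 2)
            for t, (game, ctrl) in topics.items()
            if abs(cohens_d(game, ctrl)) >= threshold}

# gain = post minus pre Likert score per kid; (game group, control group) per topic
topics = {
    "drugs_alcohol": ([1, 2, 1, 0, 2], [0, 0, 1, 0, 0]),
    "peer_social":   ([0, 1, 0, 0, 1], [0, 1, 0, 1, 0]),
}
print(screen(topics))  # only topics with a large apparent effect survive
```

With real pilot data you would also want to guard against false positives from testing many topics at once (e.g. a multiple-comparison correction), but for a cheap screen the point is only to rank topics for a proper follow-up study.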
    Idea B: Ask a teacher or parent. I would pick 5 or 10 friendly teachers and parents of teens to watch their teen play, at home on their own schedule, for a few hours over 1 week. Then I would conduct a semi-structured 1-hour interview with the teacher/parent that briefly covers ALL those topics. By having an adult interpret the reactions of a child they know well, I’m hoping to get around teens’ limited ability to self-report on these sensitive topics. Teachers and parents will add their own opinions, but those additions are valuable, as they are stakeholders: if a parent thinks the game is teaching money skills, well, maybe that’s reason enough to find out with a real study.
    Do you know other approaches that:

    1. are exploratory – I’m happy with trend-level data that “shows promise”; if I find promising effects, I intend to confirm them later with more typical, rigorous methods,
    2. detect a broad range of effects, “good” or “bad”, across those very diverse domains,
    3. are as objective and unbiased as possible – with this game, teen subjects often get excited and want to impress interviewers, so I fear they will bias results by expressing a general happiness with the game in whatever topic I survey,
    4. are low budget – perhaps $200 (kids will play for free) plus 10 hours of my time, over 6 weeks. Assume I have access to various high schools and local living centers, and could also put the game online for kids to play in a browser for free.
    Thank you!