AL Forum

AL Forum News, Volume 28:2 (May 2008)


In This Issue...

  • Leadership Updates
    • Note from Chair
    • Note from Outgoing Chair
    • Letter from the Editors
  • Articles and Information
    • Using WebQuests to Foster Cultural Autonomy in International Teachers
    • Bringing Culture Into the Classroom Through Video Exchanges
    • Finding the Right Accommodations for Assessing English Language Learners
    • Research on Standardized Language Testing and Young English Language Learners

Leadership Updates

Note from Chair

Ali Shehadeh, Department of General and Applied Linguistics/TESOL, United Arab Emirates University

Now that I have recovered from the jetlag as well as from the many commitments I had at TESOL 2008 in New York (giving two presentations, chairing a 3-hour Academic Session, attending nine meetings for the ALIS and the Awards Standing Committee, attending the TESOL Quarterly Editorial Board meeting, and attending several Discussion Group sessions I had organized for the ALIS), I will relate some news of interest from TESOL New York and for TESOL Denver.

ACADEMIC SESSION: This year’s Academic Session was on “Comprehensible Input, Comprehensible Output and L2 Learning.” A panel of distinguished scholars in the field contributed to the session: Diane Larsen-Freeman, Andrew Cohen, and Teresa Pica. These scholars looked at comprehensible input and output and their roles in L2 learning as complementary, both serving to facilitate L2 learning and to make L2 teaching more effective. The session was interesting, informative, lively, and well attended. Most attendees stayed to the end, and many lingered after the session for an informal chat with the presenters. Because the 120 or more handouts ran out, several attendees requested an electronic copy of the presentations. The presenters kindly agreed to send me e-copies of their presentations, which they did. (If you’re interested, just e-mail me and I’ll be happy to forward a copy to you.) The presenters also agreed to write summaries of their contributions for the November issue of the newsletter, so watch for these in the fall issue.

DISCUSSION GROUPS: I tried to visit as many of the 12 Discussion Group sessions I organized for the ALIS as I could. However, because of schedule conflicts, I was unable to attend some of them. Most of the sessions I visited were well attended. Because an interactive mode of delivery prevailed in these sessions, attendees felt actively involved in participating in and contributing to the topics the presenters raised.

ALIS BUSINESS MEETING: A new chair-elect, a new newsletter coeditor, and a webmaster were voted in at the ALIS business meeting. Several other issues were also discussed. The details of the meeting are presented in Howard Williams’ report, in this newsletter.

TESOL 2009: At TESOL 2009 in Denver, the ALIS will sponsor an InterSection with the Intercultural Communication IS on plagiarism, culture, and ethics. We were also invited to join InterSections sponsored by two other ISs: the Second Language Writing IS and the Materials Writing IS. With the former we will have an InterSection on writing across the curriculum and applied linguistics, and with the latter an InterSection on what insights can be gained from applied linguistics research for materials writing. I’m quite excited about these three InterSections, I must say. I’ll write more on these in the November issue of the newsletter.

Salam (so long!),

Note from Outgoing Chair

Howard A. Williams, Teachers College, Columbia University

THE 2008 CONFERENCE: The New York City conference is now behind us. Even some of us who live in New York are still recovering from the experience. Though we have not received figures, there seemed to be a larger-than-normal attendance this year, which can both raise the energy level and deplete it. The number of concurrent sessions made it hard for many of us to attend everything we had hoped to; we await feedback on the compressed schedule, which helped to keep monetary costs down even as it made days longer. Our Academic Session, entitled “Comprehensible Input, Comprehensible Output, and L2 Learning,” was very well attended, and people reported attending many other worthwhile sessions.

Members have been asked to submit feedback on the New York conference to an interest section leader, possibly in the form of recommendations to the Central Office or the Board. You may send your comments to me, and I will forward them to TESOL.

ALIS BUSINESS MEETING: At Thursday’s business meeting at the conference, Ali Shehadeh formally moved up to the ALIS Chair position. Scott Phillabaum’s term as a coeditor of the newsletter ended at the conclusion of TESOL New York, but he was nominated as chair-elect and voted in, unopposed. A warm welcome to Scott, who has been teaching at Cal State Dominguez Hills and will join the Linguistics and Language Development faculty at San Jose State University in the fall. Lorena Llosa, the current coeditor, was joined by Priya Abeywickrama of San Francisco State University.

David Olsher reported on the activities of the Research Committee, which was formed 2 years ago when the Research Interest Section was dissolved. The purpose of the new committee is to promote careful research across all interest sections and to advise TESOL as a whole on research-related matters; the committee will also provide workshops for those interested in pursuing research. This year’s conference featured a Research Fair with presenters Tom Scovel, Anne Burns, Karen Johnson, and Bonny Norton, as well as four research-related panels.

The issue was again raised about whether the definition of applied linguistics as stated in two places on the TESOL Web site adequately expresses what we envision as falling within the scope of applied linguistics (AL). There appeared to be general satisfaction with the existing definition with no sense that any subject area was being wrongly included or excluded.

Though several resolutions were considered, only one was actually read. It addressed the fact that TESOL’s Web page and preliminary brochure for each year’s conference mention (and de facto promote) only those lodging options connected with the convention site itself, possibly leaving many prospective attendees with the impression that these hotels are the only nearby or reasonable lodging options. The ethical question surrounding this exclusive promotion of higher-priced hotels was raised, as conference expenses may prevent many members (especially those from other parts of the world) from attending the conference. The resolution called for a listing of less-expensive options or a Web link to the same. The resolution was approved for presentation at the Interest Section Leadership Council (ISLC) meeting.

ALIS member Rosane Silveira has taken responsibility for creating a blog on which AL people can pursue specific AL-related interests. We imagine that these might break down according to topic areas such as conversation analysis, SLA, grammar, and so on. There will be more to say once the project is under way.

ISLC MEETING: Three resolutions came up for vote at the ISLC meeting on Friday. As you may know, caucuses have now been dissolved (for legal reasons, we are told), and two members presented resolutions that would create new interest sections from the remnants of two old caucuses. A solid majority approved that the caucus on Nonnative-Speaking Teachers of English become an IS; another resolution, from Teachers for Social Responsibility, was read but not voted on. Finally, ALIS’s resolution on hotels was narrowly defeated, even with minor revisions; dissenters argued, in part, (a) that members should do their own sleuthing to find affordable lodgings and (b) that it would not be feasible for TESOL to provide comprehensive information on reasonably priced lodging in a city. The first caucus resolution will presumably be read before the board at next year’s business meeting.

SCHOLARSHIPS: At the ISLC meeting, a council member announced TOEFL grant and award opportunities through Educational Testing Service. To quote their flyer, “Funds are available for doctoral students, MA students, new ESL/FL teachers, young scholars (under 40 years old), researchers, conference planners, international educators, libraries and resource centers in EFL settings, and TESOL affiliates and other professional organizations.” Grants and awards range from $2,000 to $15,000. For more information about applying, go to the ETS Web site and follow these links: (1) “Research”; (2) “TOEFL Research”; (3) “For English-language Teachers”; and (4) “Grants & Awards.” You may also inquire by e-mail.

At this point, let me again express my appreciation for having had the opportunity to serve as your interest section leader these past 2 years!

Letter from the Editors

Scott Phillabaum, California State University, Dominguez Hills, and Lorena Llosa, New York University

Welcome to another issue of ALIS Forum! In this issue (28.2) we are pleased to feature articles on two current topics in language education: the use of technology and assessment.

Reporting on the use of cultural WebQuests in an international teacher “summer camp” setting in West Africa, Donna Brinton describes how targeted Internet searches for specific information can enhance EFL students’ access to rich language and information about the target culture. Such activities, she argues, are especially valuable where participants have little access to native speakers and lack information on the target language and culture. Similarly, Joyce Cunningham discusses an innovative approach to cultural exchanges using student-generated videos. She describes in detail the implementation of a video exchange project between her university in Japan and a number of universities abroad. As with the activity described in Brinton’s piece, these video exchanges provide students who otherwise would have little access to interaction in English with opportunities to use English in meaningful ways. Both are wonderful proposals for EFL practitioners and teacher educators alike, allowing access to relevant cultural information as well as meaningful interaction in the target language.

The first article on assessment, written by Sarah Finck, provides an overview of the existing research on test accommodations for K-12 language learners in the United States. Because English language learners are no longer exempt from taking standardized tests, accommodations are a means of allowing these students to demonstrate their actual content knowledge by lessening the language load of standardized tests. Finck focuses on two types of accommodations, linguistic simplification and dictionary use, and their differential effect for subgroups of English language learners. Kim Woo’s article, also on assessment, discusses the unique considerations involved in assessing young language learners. Her article reviews the literature related to standardized language testing and young language learners, and presents a case for increased research to inform the development and use of valid standardized language tests for this population.

Finally, this issue marks the end of Scott’s tenure as coeditor of the ALIS Forum. He has enjoyed his 3 years in this position, and is especially grateful to the two coeditors with whom he has worked: Stefan Frazier and Lorena Llosa. Without a doubt, Lorena and her new coeditor, Priyanvada Abeywickrama, will maintain the newsletter’s tradition of publishing interesting and thought-provoking pieces. Scott will continue to be an active participant in the interest section, though, as he moves into his new role as chair-elect.

We thank all submitting authors for their contributions and hope you’ll enjoy this issue of the ALIS Forum!

See you at TESOL next year in Denver!

Articles and Information

Using WebQuests to Foster Cultural Autonomy in International Teachers


A WebQuest engages learners in conducting targeted Internet searches for specific information. The author reports on the use of cultural WebQuests in an international teacher “summer camp” setting in West Africa, where participants have little access to native speakers and lack information on the target language and culture. Included in the discussion are procedures for conducting cultural WebQuests, sample WebQuest activities, and outcomes of the project.

Using WebQuests to Foster Cultural Autonomy in International Teachers

Donna M. Brinton
Professor of TESOL
Soka University of America
Aliso Viejo, CA

Worldwide, nonnative-English-speaking teachers (NNESTs) make up as much as 80% of all those teaching English as a second/foreign language (Canagarajah, 1999). Access to the target language and culture remains an issue for the vast majority of NNESTs who teach in English as a foreign language (EFL) contexts, as many have little access to native speakers and have not had the opportunity to experience the target culture firsthand.

One form of teacher professional development that is available in some contexts is the increasingly popular “summer camp” for teachers, during which participants benefit from sessions conducted by foreign “experts” in the field of TESOL/Applied Linguistics. Typically, the main focus of these camps is on updated teaching methods, assessment measures, and materials/curriculum development. However, as a byproduct the participants also benefit from the increased access to the target language and culture that the native speaker trainers provide.

In my experience conducting such in-service sessions, especially in low-resource environments such as Central Asia and West Africa, I have found that participants often share the following characteristics:

1. developing English language skills;
2. minimal access to native speakers and the target language;
3. limited knowledge of the target culture; and
4. lack of confidence in their ability to use technology.

To address the above, I decided to include WebQuests as one component of the training programs I conducted in Senegal and Mali during the summers of 2005 and 2006. A WebQuest is an “inquiry-oriented activity in which most or all of the information used by learners is drawn from the Web. WebQuests are designed to use learners' time well . . . and to support learners' thinking at the levels of analysis, synthesis, and evaluation” (Dodge, n.d.). Further, they:

1. allow participants to research a specific question or topic using the World Wide Web;
2. guide them through the discovery process;
3. expose them to authentic language and real-world information; and
4. help them access the target culture.

My decision to include WebQuests was based on the ready availability of Internet cafés in both countries, which made teacher access to the Internet affordable and convenient, along with the availability of computer facilities in the centers where I conducted the training. It was also based on my intuition that English teachers in this setting would have little prior knowledge about the U.S., its language, and its culture. My goals in including this component were for the participants to (1) gain increased facility with Internet searches; (2) build confidence in their ability to locate information about the target language and culture; and (3) be better able, in the future, to respond to questions from their students about the target language and culture.

Most participants began the training with very rudimentary knowledge of the Internet. Some reported using the Internet to send e-mails to friends while others reported reading the news in French on the Internet. A few had rudimentary knowledge of how to conduct searches, though this was restricted to searches in French. By the end of the two-week training, however, all exhibited facility with Internet searches and were excited about their new-found ability to navigate the Web.

To prepare the participants for conducting cultural WebQuests, I first devoted several sessions to using basic search engines (e.g., Google, Yahoo) and metasearch engines (e.g., Dogpile, Webcrawler, Kartoo). We focused on using advanced search features to conduct more efficient searches. To this end, participants practiced locating information using search features such as “all of the words,” “exact phrase,” and “at least one of the words.” We then compared the results of the searches conducted and discussed which of the features yielded the most efficient results.

Once the participants were comfortable searching for information, we began a series of cultural WebQuests (one a day, with 90 minutes devoted to each day’s search). For each day, I selected a general topic, listing 16 questions. I paired participants heterogeneously (i.e., participants with some previous Internet experience with those with little or none) and awarded small prizes (e.g., pocket dictionaries, flash cards, notebooks) for the pair first able to correctly answer all 16 questions.

The inclusion of prizes generated a healthy, competitive atmosphere and added to the participants’ enjoyment of the activity. Some sample topics and questions follow:

1) US geography – Select a state and find its…

a) nickname
b) current population

2) US traditions

a) What is a rodeo? Name three typical events that take place at a rodeo.
b) What is a soap opera? Name two famous American soap operas.

3) Americans in the news

a) Who is Arnold Schwarzenegger? What position does he currently hold? Name two things he did in his past life.
b) Who is Dr. Dre? What is his real name? Name two people he has promoted.

4) Save the environment

a) What are Get Green’s top 5 tips for saving the world?
b) What is the most polluted city in the US?

5) American popular culture

a) Who is Bart Simpson? What color is his mother’s hair?
b) What is a couch potato? What does he/she do for fun?

6) Hollywood – For each phrase, name the movie, the actor/actress who uttered the phrase, and your guess as to its meaning in popular culture.

a) You talking to me?
b) Hasta la vista, baby!

At the end of each WebQuest session, we devoted approximately 10-15 minutes to sharing findings. Participants turned in their written responses to me so that I could determine the “winners” of the WebQuest (usually 2 or 3 pairs of students) and award the prizes that I had brought from the States. We also went over the responses orally, and I used this opportunity to answer questions or provide additional information. Because of the time limitations of the program, I was generally not able to use the WebQuests as points of departure for in-depth lessons on U.S. culture. However, in one case (the Hollywood WebQuest) we followed up with a viewing of key scenes from the film Gone With the Wind, including the scene where Rhett Butler utters the number-one film quote of all time: “Frankly, my dear, I don’t give a damn.”

Overall, I was very pleased with the success of the WebQuest component of the program. However, I had not adequately anticipated the participants’ difficulty locating specific information on a page of written text. In the U.S. geography search, for example, several teams had difficulty locating the state’s population, nickname, etc., even though they had found the state’s official Web site, where this information was clearly displayed on the page. In the future, I would devote some time to helping participants scan for information on a Web page prior to beginning the actual WebQuest activities. I had also not anticipated the need to orient participants to the advantages of collaboration in their WebQuests; as a result, some teams collaborated better than others. More guidance on how to work collaboratively prior to beginning the WebQuest activities would no doubt be advisable.

Some of the advantages offered by WebQuests include the following. They:

1) encourage learner inquiry (Egbert, 2005);
2) build autonomy, convincing NNESTs that they can access the target culture without help from native speaker informants;
3) enable NNESTs to explicate culture both in planned (culture-based) and unplanned lessons, i.e., in response to student questions (see Lazaraton, 2003); and
4) provide a highly effective tool in EFL teacher development programs.

Also, when conducted in teams as in the project I outline above, they build collaborative skills.

In sum, I strongly encourage the inclusion of WebQuests in international teacher in-service programs. Where feasible, I believe that WebQuests can add an important dimension to the “summer camp” teacher training curriculum in that they provide participants with a window into U.S. culture that was not previously available to them. By conducting the WebQuests, participants learn to conduct highly targeted searches using key Internet search engines, thus providing them with a new skill set that will serve them well in their future careers. Finally, WebQuests allow international teachers access to authentic information and texts about U.S. culture. This has obvious advantages over the more traditional U.S. culture lesson, where often stereotyped information about the U.S. is provided in the form of edited readings and follow-up discussion questions.

Canagarajah, A. S. (1999). Resisting linguistic imperialism in English teaching. Oxford, England: Oxford University Press.
Dodge, B. (n.d.). The WebQuest page. Retrieved March 17, 2007, from
Egbert, J. (2005). CALL essentials: Principles and practice in CALL classrooms. Alexandria, VA: Teachers of English to Speakers of Other Languages.
Lazaraton, A. (2003). Incidental displays of cultural knowledge in the nonnative-English-speaking teacher’s classroom. TESOL Quarterly, 37(2), 213-245.

Bringing Culture Into the Classroom Through Video Exchanges

Joyce Cunningham, Ibaraki University, Japan

Five-minute video clips, generated by small groups of students and exchanged between classes, make an exciting English as a foreign language project. Video exchanges have been around for some time, but novel twists in this particular activity can make it more worthwhile. This article describes how such projects facilitate direct contact with another culture, outlines the context in Japanese universities, and then discusses the challenges and cultural discoveries involved in creating them.

Over the years, the video project has helped freshmen in the Communications Department of Ibaraki University, Japan, to use English more actively. Collaborating in small groups, our learners make short videos destined for a real target audience. Initially, these videos were exchanged with other Japanese universities, but students felt somewhat uncomfortable; explaining their university or hometown to peers who shared their language seemed too contrived. However, when the destination of these videos shifted to a non-Japanese audience, motivation rose substantially. These exchanges, which took place with universities in Dubai and Abu Dhabi in the U.A.E., in Oman, and with McGill and Sherbrooke Universities in Quebec, Canada, facilitated substantial interaction.

The immediate appeal of the video project is that it captures the attention of students excited by the possibility of novel contacts with international peers. They explore another culture through explaining their own culture and thus learn more about themselves in the process. All are eager to discover the values, ideas, interests, and customs of foreign partners. As Sowden (2007) stated: “the more we understand the world, human relations, and ourselves, the better able we will be to empathize with others and make connections” (p. 309). It is therefore beneficial and advisable to accompany the video exchange with an e-mail exchange or other forms of Internet communication. With direct weekly correspondence, learners have a special window of opportunity to break through international barriers. As Kouji, one of the students in the course, wrote to his partner:

Through this exchanging email, I could know about you. In addition, I could know about the cultures and traditions in Dubai. I could enjoy the differences between Japan and Dubai. I had interest. It was little hard for me to send email in English, but I was happy to exchange much information to you.

These exchanges afford fascinating opportunities for questioning and explaining customs. In addition, students realize misunderstandings can occur if they fail to keep an open and inquisitive mind. Perhaps one of the greatest lessons is that right or wrong can be relative. As another student, Yumi, wrote: “It is difficult to understand other country’s cultures. There may be a unique culture which you have never met until now, or you find it hard to accept. However, you mustn’t think that only your culture is right.” This stretching of horizons is valuable learning indeed.

This project is particularly exciting and challenging for Japanese freshmen at the university, who arrive in class with a certain amount of anxiety about speaking English communicatively. Prior to university, these students have been exposed to English primarily through grammar-translation methods.

Creating the Videos
As McDonough (2007) pointed out, it is important to provide “a supportive and challenging learning environment facilitating the development of the learners’ own motivational thinking” (p. 370), especially one where learners work together to delve deeper into their own culture and that of others. To this end, students are encouraged to choose areas of interest.

The basic procedure is as follows: After students are introduced to the exchange with clips from previous classes, topics are brainstormed about traditional or modern Japan. Choices have ranged from traditional seasons or food to the modern use of cell phones, music, and so on.

Initially, newly formed groups should negotiate main ideas and details. Ideas are rendered more visual and precise with a storyboard focusing on key words, timing, props, characters, camera positions, and so on. Logs are kept to report briefly on work accomplished and the amount of English used. Every few classes, short progress reports permit the instructor to intervene with problems or find solutions for props, locations, and so on in a timely manner. Two or three Internet sources are researched and reported on. Thus, learners add facts and concrete examples to their information, for all too often students know their own culture only superficially. Emi wrote:

I could know many things about tea ceremony and kimono. In spite of being Japanese, I didn’t have knowledge about Japanese kimono. However, I became to be able to explain about it for other people. In addition, I have more interest about kimono.

Before the project begins, a role-play practice is advisable so that all become familiar with the equipment and simple filming techniques. This initial scaffolding increases awareness of pronunciation, volume of voice, and body language. Students are introduced to time management and cooperation in small groups. It encourages them to experiment with distancing themselves somewhat from reading a memorized script and accustoms them to standing in front of a camera. Shyer (or weaker) students can choose the roles of director or camera person.

Before final filming, dress rehearsals are held. Ample praise and gentle suggestions for improvement can do much to shore up lagging confidence and enthusiasm. This is a good opportunity to stress the importance of effort, creativity, and communication in English, and to note that no actual marks are given for acting skills. Finally, the instructor can check on timing, props, batteries, whether permission to film has been requested, and so on.

With fewer students in class, the instructor can concentrate on those still completing scripts or dialogues. Scripts can be memorized, or, in the case of my Japanese students, who live in fear of mistakes, notes with keywords can be used. They are reminded that forgetting is perfectly acceptable because a clip can be refilmed. Learners should also focus on body language, for on occasion my students have appeared somewhat wooden to viewers from other cultures. However, such comments from foreign peers heighten awareness of the need to improve nonverbal communication.

A volunteer can introduce the iMovie 2 video-editing software while the instructor works with slower groups. English music can enhance video clips and add atmosphere, and it requires thinking about the meanings of the chosen songs. When editing is finished, the video clips are sent on CD or posted on the Internet. While waiting for the exchange class’s comments, my students are eager to view their own productions, although these are not critiqued. Each group takes its turn giving a brief presentation on what happened behind the scenes, the reasons for its choice of topic, the difficulties encountered, and the solutions to those problems. Using notes and keywords, learners have yet another opportunity to present to a real audience.

Viewing and commenting begin with the arrival of the exchange country’s videos. As Cunningham and Batten (2003) explained: “a lot of excitement is generated in this mutual assessment. For once, learners are processing real information that has not come from the teacher” (p. 14). Each group prepares constructive comments as well as suggestions for improvement and then shares them with the class. It is truly valuable to know how information is received from a foreign perspective. It is instructive for all to reflect on these perceptions to increase awareness of self and others.

After the exchange video is viewed, the best three groups are voted on. Categories can include best overall video, best production (camera, sound, editing), most creative video, and so on. When giving advice or feedback, Japanese students may be reluctant to write what they truly think if they find a clip lacking, preferring to overlook rather than hurt. This project offers opportunities to learn to negotiate more clearly not only within the groups but also with participating classes.

An Academy Awards ceremony is held, and small awards are presented to the three best groups nominated by the exchange class. These groups have first choice of awards, followed by all others. From the start, the chance to win these “prestigious” awards raises motivation.

A few last twists make these student-centered tasks easier to mark. Portfolios are compiled because it can be difficult to gauge actual participation. Portfolio entries include e-mail/WebCT letters, Skype/MSN logs, brainstorming, storyboards, notes/script, creative cover pages, vocabulary and class logs, progress reports, and final self-evaluations, as well as printouts of the e-mail and Skype/MSN exchanges.

Three- to five-minute poster or PowerPoint bio presentations about the e-mail/WebCT/Skype/MSN partners ensure that participants remain actively engaged in exchanging information, acquire an accurate profile of their partner, and report on the cultural learning that occurred during the contact. Emiko wrote:

Listening the presentations of other students, I noticed that the good points of them. Some people’s slides were very visually using animation, others were speaking with some body languages. Every topic was very interesting. I learned a lot from them and really enjoyed all of them.

The audience comments individually or in small groups. Assessment of the effectiveness of each talk addresses content, timing, clarity (pronunciation or organization), delivery, and creativity. More simply, students can note keywords and evaluate the talk on a scale from 1 to 10, with peer feedback returned afterward.

Final self-evaluations encourage reflection on participation, enjoyable activities, and learning as well as comments on improvements to make in their performance or the project. As Keiko mentioned:

First I could learn a lot of things in this project. I noticed my preconceived idea of Dubai were wrong. I thought almost people are living in desert. I didn’t know that Dubai is such a developed country and developed very very quickly. When I know that I was very surprised. Second, […] We have quite difference cultures. Every thing was fresh for me I could know how Muslims spend Ramadan, why it is important for them what they think about it, what school is like, what they do holiday and so on. Some are different from us and others are similar.

Needless to say, the project is challenging and demanding, and in monolingual classes such as ours a certain amount of Japanese is spoken, especially at the brainstorming, filming, and editing stages. However, group dialogues and notes, portfolios, final reflections, the e-mail/WebCT exchange, Skyping, and class presentations have all been conducted in English. Many students recognize that they have made progress.

In this project, freshmen at Ibaraki University are learning valuable life skills such as negotiation, delegation of responsibility, and time management as well as other useful and practical skills such as use of the Internet, Skype/MSN, and basic camera equipment and video editing. Collaborating in small groups to create self-generated dialogues with graphics, sound, and music not only helps students improve their English, but can also bring out hidden talents and lead to self-discoveries. As Mitsuko said:

I learned many things about Japan. Lastly, I felt an advance of my English ability as compared with what I was in April. If I were in early April, I would [not] be able to do it. Maybe, I was too embarrassed to speak to everyone in English. I think I want to advance my English ability. Therefore, I try to do challenge more from now on.

Note and Acknowledgment
If interested in exchanging videos in the fall of 2008, please write to
My thanks to the General Education Department, English section, at Ibaraki University.

Finding the Right Accommodations for Assessing English Language Learners

Sarah Finck, New York University,

Accommodations have been introduced in order to help make English Language Learners’ (ELLs) performance on standardized tests a more accurate indication of their content knowledge and less a reflection of their English ability. This article discusses two types of accommodations, linguistic simplification and dictionary use, and their differential effect for subgroups of ELLs.

Standardized testing is a reality for students in the United States today whether they have achieved English proficiency or not. Prior to the No Child Left Behind legislation of 2001, which mandated the testing of all students in order to hold schools and districts accountable for making adequate yearly progress toward high educational standards, English language learners were often excluded from large-scale testing. Now that exemption from testing is no longer an option, testing accommodations have been introduced to help make English language learners’ performance on standardized tests a more accurate indication of their content knowledge and less a reflection of their English ability.

Coming in many shapes and sizes, accommodations are “support provided students for a given testing event, either through modification of the test itself or through modification of the testing procedure, to help students access the content in English and better demonstrate what they know” (Butler & Stevens, 1997, p. 5). The most commonly researched accommodations include linguistic modification or simplification; glossaries and customized dictionaries; published dictionaries; dual-language test booklets; translated tests; and oral administration. Yet even with a constantly growing body of research on the subject, it is becoming clear that we still do not know enough about which accommodations are most effective for which test-takers; in particular, it appears that proper assignment of accommodations may be a function of each student’s individual language and background characteristics.

This article discusses two types of accommodations, linguistic simplification and dictionary use, which are among the most researched accommodations for English language learners (Abedi & Gándara, 2006; Abedi, Hofstetter, & Lord, 2004), as well as the differential effect of these accommodations for different subgroups of English language learners. The article concludes by proposing future research needed to further our understanding of how English language learners interact with tests and how accommodations can help improve the validity of the interpretations that are made on the basis of their test performance.

Linguistic Simplification
Finding a way to level the playing field between English language learners and non-English language learners is challenging, but linguistic simplification has proven to be an effective accommodation for ensuring that tests measure what they set out to measure, namely, content-area knowledge. Research has shown that the linguistic complexity of test items can be a large contributing factor in the performance gap between English language learners and non-English language learners on assessments of math and science knowledge (Abedi, Lord, & Plummer, 1997); in particular, long noun phrases, passive voice constructions, long question phrases, prepositional phrases, subordinate clauses, conditional clauses, relative clauses, abstract nouns, and negation (Abedi et al., 1997; Rivera & Stansfield, 2004) are problematic for English language learners, who have lower syntactic awareness skills than do non-English language learners (Abedi & Gándara, 2006). Consequently, linguistic simplification, or modification, of test items has been identified as a solution to this problem of complexity. By systematically rewriting instructions and questions to eliminate or change the above structures and make the language more straightforward, construct-irrelevant variance is reduced: English language learners perform better while non-English language learners remain relatively unaffected.

Several studies provide support for the validity of linguistic modification, and they lead us to see that the effectiveness of the accommodation depends on a handful of factors. Abedi, Lord, Hofstetter, and Baker (2000) looked at four different accommodation strategies (linguistic simplification of test items; glossary; extra time; and glossary plus extra time) and their effect on the National Assessment of Educational Progress (NAEP) Mathematics test for eighth-grade students in southern California. Though the glossary plus extra time accommodation had the greatest impact, it helped English language learners as well as non-English language learners, thus not narrowing the achievement gap. When no extra time was provided with a glossary, English language learners’ scores were actually lower, possibly as a result of information overload. The most significant finding is that linguistic modification was the only accommodation that narrowed the gap between English language learner and non-English language learner scores. Similarly, Rivera and Stansfield’s (2004) study of elementary science assessments in Delaware found that there were no statistically significant differences in scores on regular and simplified tests for non-English language learners, thus supporting the theory that linguistic simplification does not affect the comparability of scores.

Other studies, however, show that linguistic simplification still cannot be considered the complete solution to the issue of fairly measuring English language learners’ content knowledge. After finding that the lower performance of English language learners on NAEP math items was in large part due to the linguistic complexity of test items, Abedi et al. (1997) administered a simplified version and found that whereas students in the lower- and intermediate-level math classes benefited from linguistic simplification, those in the highest level did not. This finding raises several further questions about the relationship between students’ language ability and math level. Do the students in the higher level math class also have a higher English proficiency level? Have they been in the United States longer? How much schooling, in their home language or in English, have the students had? It is becoming increasingly clear that these and other issues are relevant to finding appropriate accommodations. If there is still a performance gap between English language learners and non-English language learners in the higher level math classes, how can it be reduced if linguistic simplification is not helpful?

In summary, the linguistic complexity of items on math and science assessments has been found to be a central factor in the overall lower performance by English language learners. Modifying, or simplifying, these items has generally proven not to affect construct validity, but only in some cases has it also been found to narrow the performance gap. Nevertheless, this accommodation is still one of, if not the, most promising in terms of leveling the playing field between English language learners and non-English language learners without giving either group an unfair advantage. In the cases where it is not helpful, we need to continue exploring other options.

Dictionary Use

The research on dictionaries as a testing accommodation is multifaceted, looking at both published and customized dictionaries, administered with and without extra time. Dictionaries are tools that are potentially more useful for learners of intermediate proficiency than for beginners or advanced learners (Albus, Thurlow, Liu, & Bielinski, 2005): low-proficiency students may not be able to access enough of the context to make the dictionary useful, or may lack experience using dictionaries, while more advanced students may have the vocabulary but need more processing time or simplified structure.

In a study by Abedi, Courtney, Mirocha, Leon, and Goldberg (2005), 611 fourth- and eighth-grade students took NAEP-style science tests with either no accommodation, an English dictionary, a bilingual dictionary, or linguistic modification. Findings indicated that non-English language learners were not affected by the accommodations, lending support to their validity. Furthermore, English language learners at both grade levels did perform better under an accommodation. Of all the accommodations implemented in this study, the English dictionary was the most effective for fourth-grade students but less so for eighth graders, for whom linguistic simplification was more effective. Whereas fourth-grade English language learners receiving the bilingual dictionary accommodation performed better than did those without any accommodation, they did not perform as well as did those with the English dictionary. In contrast, eighth graders with the linguistically modified test version scored highest, those with English dictionaries second best, those with no accommodation third, and those with the bilingual dictionary lowest. It therefore seems that many factors, including grade level and proficiency level, determine which accommodation works best.

Similarly, in Albus et al.’s (2005) investigation of the influence of dictionary use on reading test performance for Hmong English language learners, findings indicated that dictionaries helped English language learners with an intermediate level of English reading proficiency and some dictionary skills, but students with lower and higher language proficiency did not benefit as much. In addition, students who spent more time with the accommodation did better than did those who used the dictionary but spent less time. In this particular study, because Hmong students’ native-language literacy is low, a monolingual English dictionary was appropriate. For other language groups, a bilingual dictionary may be more suitable, but this brings limitations in feasibility, for it soon becomes costly to provide dictionaries for every language group. More research is needed on how students of different proficiency levels and with different background characteristics respond to the various types of accommodations.

Student Background Characteristics and Perceptions of Accommodated Tests
Not only is it important to establish the usefulness of the accommodations themselves, but it is equally essential to understand how individual student characteristics could affect the effectiveness of these accommodations. In a recent study, Kopriva, Emick, Hipolito-Delgado, and Cameron (2007) investigated whether individualized accommodation assignments improved English language learners’ scores on a math assessment. The accommodations in this study included different combinations of a picture dictionary, oral reading of test items in English, and a bilingual glossary. The individual characteristics considered included language proficiency (English reading and listening, L1 reading), cultural proximity, and U.S. schooling factors. Kopriva et al. then compared the performance of students who received the accommodations recommended given their background with that of students who received accommodations not recommended given their background and of students who received no accommodations. Findings indicated that students who received the recommended accommodations package performed best. Kopriva et al. also found that “inappropriate assignment of accommodations across ELLs [English language learners], without giving consideration to their ELP-R [English language reading proficiency] or L1-R [first language reading ability], appears to be no more useful than receiving no accommodations at all” (p. 18). It is critical to take these results seriously and to put energy into identifying which accommodations are best for different subgroups of English language learners.

How can we determine whether the students themselves feel that the chosen accommodations are actually helpful? Relatively few of the studies on accommodations have used student feedback, but those that have reveal valuable additional information about the effect of accommodations. Abedi et al. (1997) conducted a Student Perceptions Study, holding structured interviews with 38 eighth graders (native and nonnative English speakers) and asking participants to express a preference for either original test items or linguistically simplified items. A significant number of participants found the simplified items easier to understand than the original, more complex items. Similarly, Albus et al. (2005) used a posttest questionnaire and learned that although most students did not take full advantage of the dictionary accommodation, most felt that access to it would be beneficial. Knowing how students view the accommodations can prove extremely useful in determining how the accommodations contribute to the face validity of the modified testing conditions, the overall testing experience, and the accommodations’ general perceived usefulness.

Even in light of the growing body of research on the effects of different accommodations on English language learners’ performance on large-scale content assessments, there is still little systematicity in the choice of accommodations for individual learners. Accommodations should therefore be selected not only on the basis of their validity for the English language learner population as a whole and their effectiveness in narrowing the achievement gap, but also with attention to the differential effects of certain accommodations on different subgroups of English language learners. Future research on accommodations should focus specifically on differential effects across English language learners of different proficiency levels, and it should make more use of qualitative studies to investigate how test-takers process the accommodations.


References

Abedi, J., Courtney, M., Mirocha, J., Leon, S., & Goldberg, J. (2005). Language accommodations for English language learners in large-scale assessments: Bilingual dictionaries and linguistic modification (CSE Tech. Rep. No. 666). Los Angeles: University of California, Center for the Study of Evaluation.

Abedi, J., & Gándara, P. (2006). Performance of English Language Learners as a subgroup in large-scale assessment: Interaction of research and policy. Educational Measurement: Issues and Practice, 25(4), 36-46.

Abedi, J., Hofstetter, C. H., & Lord, C. (2004). Assessment accommodations for English language learners: Implications for policy-based empirical research. Review of Educational Research, 74(1), 1-28.

Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners’ test performance. Educational Measurement: Issues and Practice, 19(3), 16-26.

Abedi, J., Lord, C., & Plummer, J. (1997). Language background as a variable in NAEP mathematics performance (CSE Tech. Rep. No. 429). Los Angeles: University of California, Center for the Study of Evaluation.

Albus, D., Thurlow, M., Liu, K., & Bielinski, J. (2005). Reading test performance of English-language learners using an English dictionary. The Journal of Educational Research, 98(4), 245-254.

Butler, F. A., & Stevens, R. (1997). Accommodation strategies for English Language Learners on large-scale assessments: Student characteristics and other considerations (CSE Tech. Rep. No. 448). Los Angeles: University of California, Center for the Study of Evaluation.

Kopriva, R. J., Emick, J. E., Hipolito-Delgado, C. P., & Cameron, C. A. (2007). Do proper accommodation assignments make a difference? Examining the impact of improved decision making on scores for English language learners. Educational Measurement: Issues and Practice, 26(3), 11-20.

Rivera, C., & Stansfield, C. (2004). The effect of linguistic simplification of science test items on score comparability. Educational Assessment, 9(3&4), 79-105.

Research on Standardized Language Testing and Young English Language Learners

Kimberly Woo, New York University,

Considering the age and development of elementary-age English language learners, this article reviews the literature related to standardized language testing and young language learners, and presents a case for increased research to inform the development of more valid standardized language tests for this population.

In the current culture of accountability, children are being tested at younger and younger ages. In the United States, in response to the accountability demands of No Child Left Behind (2001), many states are formally assessing students beginning in the second grade. With testing becoming increasingly common at such a young age, a number of issues become important to consider, particularly the impact of such testing on young English language learners and the validity of the tests for this population (Francis & Rivera, 2007).

As part of the research on test validation, the field of language testing has long been concerned with the role of test-takers’ characteristics in their performance on language assessments. This work thus far has focused primarily on the effects of test-takers’ gender and cultural and educational backgrounds (Elder, Iwashita, & McNamara, 2002; Lumley & O’Sullivan, 2005; O’Loughlin, 2002; Sasaki, 2000). However, this literature has generally overlooked age as a factor that might impact students’ performance on tests.

Much of the existing validity research in language testing has focused on assessments for older students and adults. However, what is appropriate or true of older or adult language learners is not necessarily so for those who are younger. Older language learners are in a much different maturational stage than are school-age students who are still in the process of growing and developing. Students’ development is of particular relevance when one considers language assessment at the elementary school level.

Elementary-age language learners are among a category of students called “young language learners,” defined by McKay (2006) as children “learning a foreign or second language and who are doing so during the first six or seven years of formal schooling . . . between the ages of approximately five and twelve” (p. 1). A feature that distinguishes young language learners from their postpubescent counterparts and that identifies them as a unique population worthy of study is that these students are learning the second or additional language while still developing their first language.

In addition, children between the ages of 5 and 13 undergo considerable cognitive, social, psychological, emotional, and physical growth and change, meaning that children at different points in this process have (or have not yet acquired) certain developmental characteristics that may influence their performance on standardized language tests. McKay (2006) noted that in this age range, children are developing their attention span, learning to sit still for longer periods of time, learning to read and write, developing social skills, learning to negotiate different peer groups, and learning to think and speak about language. She also noted that this is an age when children and their self-image are particularly vulnerable to negative feedback and stressful situations.

Understanding what young English language learners need from and bring to language assessment and how test developers currently conceptualize assessment for this population can lead to improved measures, fairer procedures, and better use of the assessments.

Existing Research

In the language assessment literature, investigations into language testing of young language learners have tended to be subsumed under broader investigations of school-age (K-12) assessment and have rarely dealt specifically with language test development or validity. The discussion has focused primarily on three main areas: policy and standards, classroom assessment, and standardized testing.

Policy and Standards

A considerable amount of research on school-age language learners in the United States has focused on the policies of assessing English language learners with a particular emphasis on three areas: standards, large-scale assessments of content knowledge (e.g., math and science), and academic language proficiency (McKay, 2005). Discussion of these areas frequently overlaps as researchers evaluate language testing policy and investigate the apparent conflict between standards, standardized tests, and alternative assessment.

Much has been made of issues of fairness and validity in the assessment of school-age English language learners, including the usefulness of accommodations, the ethics of policies relating to testing inclusion and exclusion, and the alignment between the tests and the standards (Bailey & Butler, 2004; Butler & Stevens, 2001; McKay, 2000). While these discussions of validity and fairness operate at a level far above the interaction between test and child, they do hold relevance, as exemplified by McKay’s (2000) emphasis that standards should reflect the idea that students at different ages are developmentally different and that language development and language demands differ across age ranges. Ideally, acknowledgment of this notion at the standards level would be reflected in changes to the standardized tests.

Classroom Assessment

Language assessment research specifically investigating young language learners has focused primarily on classroom assessment, frequently in a foreign language context. This work has spanned a large range of topics and often attends more to the role of the teacher than that of the students, with topics including teachers’ interpretations of ESL standards, their use of and attitudes toward different assessments, the kinds of judgments they make as assessors, their reliability in making judgments, their professional knowledge regarding assessment, and their use of assessment results (Edelenbos & Kubanek-German, 2004; Llosa, 2005; Rea-Dickins, 2004; Rea-Dickins & Gardner, 2000; Teasdale & Leung, 2000). It is important to note, however, that this work has typically had to do with nontest alternative assessments such as portfolios and classroom observations rather than large-scale standardized language tests.

Standardized Testing

As previously noted, the literature on language test validation consists primarily of research relating to tests for older language learners and adults. This is particularly true of the literature on standardized language assessment. Though it may be assumed that the publishers of commercially produced standardized language tests for younger students have studied the reliability and validity of their products, this research is not typically made available to the public. Of the many commercial standardized language tests for young language learners on the market, the Cambridge Young Learners English (YLE) tests, a set of three large-scale, low-stakes English proficiency tests for students aged 7 to 13, are the most commonly analyzed and discussed tests of this type in the literature, if not the only ones (Bailey, 2005; McKay, 2006; Taylor & Saville, 2002).

Aspects of the YLE tests have been highlighted as examples of how the needs of young language learners should be addressed in the development of a standardized language test. These aspects include the use of “game-like” items authentic to children’s real-world language use and an emphasis on success through award systems and predictable structure, addressing children’s vulnerabilities and increasing motivation (McKay, 2006). The developers’ success in addressing the needs of young language learners is corroborated in a test review by Bailey (2005), who deemed the YLE tests developmentally appropriate, pointing particularly to the illustrations and the differentiation among levels of the test, and stating,

The developers of the YLE tests take very seriously threats to validity that come from the inappropriate testing of children . . . overall, these design features—coupled with examiner training to deliver the tests to children in an enjoyable and non-threatening way—may all contribute to the lessening (if not realistically removing) children’s anxiety and increasing the motivation of the young test taker. (p. 248)

Considerations in Developing Tests for Young Language Learners

Although development research on specific tests is lacking, researchers have begun to make note of aspects deemed important in the development of tests for young language learners (Rea-Dickins, 2000a; Rea-Dickins, 2000b). McKay (2006) pointed to the vulnerabilities of young children and suggested that children not be asked about grammar or abstract notions beyond their cognitive readiness. She proposed that children be tested by a familiar adult and that the test and test administration take into account children’s physical needs, such as the need to move (McKay, 2006). Hasselgreen (2005) provided a summary of suggestions from the young language learner literature, including having “elements of game and fun,” designing tasks to emphasize student strengths, providing support for the student, and having the assessment tasks be “good learning activities in themselves” (pp. 338-339). These considerations present a starting point for the development of more valid tests for young children, but research has yet to describe how to put such suggestions into practice.


The notion of focusing on young language learners is not a new one. In a paper presented at the April 1967 TESOL Convention, John Upshur of the University of Michigan discussed the development of standardized foreign language testing for young children, emphasizing the need for tests that are cognitively and socially appropriate to the child’s development. As he stated, “Before going very far in the development of tests for children, one has to know what kind of language tasks they find enjoyable, tolerable, or even only possible, and how frequently, and for how long they will play” (1967, p. 33). He concluded with the suggestion that there be communication between test writers and teachers, who “already know what games children will play and can win” (p. 34). This sentiment is echoed nearly 40 years later by McKay (2005): “Researchers require specialist knowledge of young learners to investigate assessment issues. Such knowledge includes, for example, the characteristics of young learners and the characteristics of tasks (the setting, the input, the nature of the expected response) likely to affect performance in the assessment procedures” (p. 256).

In the current educational climate, particularly in the United States where high-stakes, large-scale tests are the main measure of accountability and in which children are increasingly assessed from an early age, there is a very real need to ensure that the language tests in use are valid for young language learners. Increased empirical research is needed with regard to how young language learners’ maturational development influences their performance on these measures. It is through knowledge of these effects that tests can be developed or adjusted to ensure fair and valid assessment. In addition, work is needed in evaluating the developmental appropriateness (and consequently, the validity) of current commercial tests and test items employed by school districts to test young language learners. To do so, researchers may need to solicit the input of teachers and become more aware of this particular population.


References

Bailey, A. (2005). Cambridge Young Learners English (YLE) tests. Language Testing, 22, 242-252.

Bailey, A., & Butler, F. (2004). Ethical considerations in the assessment of the language and content knowledge of U.S. school-age English language learners. Language Assessment Quarterly, 1(2&3), 177-193.

Butler, F., & Stevens, R. (2001). Standardized assessment of the content knowledge of English language learners K-12: Current trends and old dilemmas. Language Testing, 18(4), 409-427.

Edelenbos, P., & Kubanek-German, A. (2004). Teacher assessment: The concept of ‘diagnostic competence.’ Language Testing, 21(3), 259-283.

Elder, C., Iwashita, N., & McNamara, T. (2002). Estimating the difficulty of oral proficiency tasks: what does the test-taker have to offer? Language Testing, 19(4), 347-368.

Francis, D., & Rivera, M. (2007). Principles underlying English proficiency tests and academic accountability for ELLs. In J. Abedi (Ed.), English language proficiency assessment in the nation: Current status and future practice (pp. 13-31). Davis, CA: The Regents of the University of California.

Hasselgreen, A. (2005). Assessing the language of young learners. Language Testing, 22(3), 337-354.

Llosa, L. (2005). Assessing English learners’ language proficiency: A qualitative investigation of teachers’ interpretations of the California ELD standards. The CATESOL Journal, 17(1), 7-18.

Lumley, T., & O’Sullivan, B. (2005). The effect of test-taker gender, audience, and topic on task performance in tape-mediated assessment of speaking. Language Testing, 22(4), 415-437.

McKay, P. (2000). On ESL standards for school-age learners. Language Testing, 17(2), 185-214.

McKay, P. (2005). Research into the assessment of school-age language learners. Annual Review of Applied Linguistics, 25, 243-263.

McKay, P. (2006). Assessing young language learners. New York: Cambridge University Press.

O’Loughlin, K. (2002). The impact of gender in oral proficiency testing. Language Testing, 19(2), 169-192.

Rea-Dickins, P. (2000a). Assessment in early years language learning contexts. Language Testing, 17(2), 115-122.

Rea-Dickins, P. (2000b). Current research and professional practice: Reports of work in progress in the assessment of young language learners. Language Testing, 17(2), 245-249.

Rea-Dickins, P. (2004). Understanding teachers as agents of assessment. Language Testing, 21(3), 249-258.

Rea-Dickins, P., & Gardner, S. (2000). Snares and silver bullets: Disentangling the construct of formative assessment. Language Testing, 17(2), 215-243.

Sasaki, M. (2000). Effects of cultural schemata on students’ test-taking processes for cloze tests: a multiple data source approach. Language Testing, 17(1), 85-114.

Taylor, L., & Saville, N. (2002). Developing English language tests for young learners. Research Notes, 7, 2-5. Cambridge, England: University of Cambridge Local Examinations Syndicate.

Teasdale, A., & Leung, C. (2000). Teacher assessment and psychometric theory: A case of paradigm crossing? Language Testing, 17(2), 163-184.

Upshur, J. (1967). Testing foreign-language function in children. TESOL Quarterly, 4, 31-34.