Analyzing the Efficiency of E-Assessment of EAP Courses amid COVID-19 in Bangladesh

In industrialized countries, conducting online classes has long been a common phenomenon. However, arranging virtual classes became quite challenging for a developing country like Bangladesh during the COVID-19 pandemic. Teachers at all levels had to cope with the situation and make a drastic shift from offline to online classes in a short period of time. Although teachers could successfully conduct virtual classes, concerns about the efficacy of e-assessment remained, since assessments must depict the actual scenario of students' learning. This mixed-method study tried to determine the efficiency of e-assessment in EAP (English for Academic Purposes) courses. To that end, 30 students and 4 teachers of privately run universities in Dhaka virtually took part in this research. The findings reveal that the students found the e-assessments valid and reliable, but technical glitches made them impractical. In contrast, the teacher participants could ensure only validity. At the same time, they could apply Vygotsky's concept of the zone of proximal development and Krashen's comprehensible input hypothesis.


INTRODUCTION
In the 20th century, classroom assessment was thought to be a process that gave an index of students' learning by evaluating their understanding of the contents and judging their performance against what teachers had taught (Sangle et al., 2020). Assessment was regarded as an important aspect of tertiary education: a means of determining students' language abilities and knowledge, and a tool for gauging students' progress toward the course's objectives (Stödberg, 2012). Prior to the COVID-19 pandemic, tertiary-level evaluations could be performed both online and in person. Assessments comprised pen-and-paper midterms, finals, presentations, quizzes, group/pair assignments, and project-based tasks, among others. A review of the current literature made it evident that few studies had addressed the efficacy of e-assessment in Bangladesh. In one of their recent studies, Huda et al. (2020) shed light on this topic by conducting an inter-university study in which randomly chosen students participated. Nevertheless, the current paper investigated the perspectives of the prominent stakeholders (both students and teachers) who were directly involved in the teaching-learning process. Hence, this study focused on the following research questions:
1. What is the perception of the students of EAP courses regarding the effectiveness of e-assessment?
2. What is the perception of the teachers of EAP courses regarding the effectiveness of e-assessment?
The findings of the research would provide academics with useful insights and an incentive to conduct further studies on the same subject, taking different educational levels into account. Based on the findings, practical measures for a better learning experience might be taken. Moreover, the results of this study would help determine whether or not e-assessment should be continued after the pandemic.

Research Design
Since the study's goal was to determine the perspectives of students and teachers, a qualitative method was deemed ideal. The teacher participants cooperated and shared their experiences by responding to open-ended questions during semi-structured interviews. By examining the experiences shared by the teacher participants, the researchers could compare and assess novel circumstances (Patton, 2005). In contrast, the researchers struggled with the student participants, who were reluctant to answer open-ended questions virtually. Therefore, a quantitative approach was adopted: closed-ended questions with numerical options (on a scale of 1-4) were designed to determine students' perceptions, and the percentages and mean scores were then calculated, since, as per Creswell (2014), an investigation into a social or human problem can be carried out by using variables, measuring with numbers, and evaluating with statistical techniques. Hence, the combination of qualitative and quantitative research made the study a mixed-method one.

Sampling
A total of thirty students and four teachers from four randomly chosen private universities participated in this study. To make the study concise and focused, the researchers chose students and teachers who had completed at least one EAP course online, so that they could properly share their experiences. The study comprised 23 male and 7 female students aged between 20 and 23 years, all of them first-year students. Three male teachers and one female teacher consented to participate in the virtual interview session. A detailed profile of the teacher participants is provided in the following table:

Data Collection Instruments
The researchers chose survey questionnaires (both open-ended and closed-ended) and semi-structured interviews as the instruments for this study.
To answer the first research question, the student participants were given a questionnaire with ten closed-ended questions. They selected answers to these questions from the options given in a Google Form. Further, four generic questions were included in the survey questionnaire to obtain information about the participants' profiles. One prominent advantage was that participants could not skip questions, as answering all of them was mandatory. The semi-structured interview with the teacher participants was conducted using six open-ended questions that allowed the participants to explain their experiences and opinions. This provided the responses to the second research question.

Data Collection Procedure
The data collection process was conducted virtually due to the pandemic. After selecting the teacher participants who were willing to take part in the survey, appointments were fixed for their interviews. Subsequently, the interview questions, a Google Meet/Zoom link, and the interview schedules (based on the interviewees' preferences) were mailed to them. They were assured that their identities would be kept private and that all information would be used only for research purposes. The interview sessions were held on the specified days and recorded with the participants' permission. However, the interview session with one teacher participant took place via WhatsApp; the researchers took notes on the responses because recording was not permitted under the administrative regulations of the participant's institution. In addition, to maintain the authenticity and individuality of the student participants' responses, the researchers took assistance from their course teachers. The researchers mailed the course teachers a link to the Google Form. The URL was shared with the students during class time, and responses were collected right away.

Data Analysis Procedure
The data analysis process was broken down into several stages. Google Forms automatically exported the students' responses into a Microsoft Excel sheet. Then, the mean score and response percentages were manually calculated on the Likert scale to examine the responses to the closed-ended questions. The data was also presented descriptively for clarification. For the open-ended questions, the researchers attempted to identify themes from the teacher participants' responses after transcribing them verbatim.

FINDINGS
Student Participants' Responses to the Closed-ended Questionnaire
The student participants, through a Google Form, responded to ten statements. The statements precisely followed the Likert scale, with four choices given to the participants: strongly agree, agree, disagree, and strongly disagree. The option 'neutral' was purposefully excluded to derive a better study outcome; hence the choice of options was forced. The responses were assessed once again by calculating the mean score. For this purpose, the following numerical values were assigned:
Strongly Agree = 4
Agree = 3
Disagree = 2
Strongly Disagree = 1
In addition, an interpretation scale based on the mean score was created to present the survey questionnaire findings:
3.51-4.00 = very positive perception
2.51-3.50 = positive perception
1.51-2.50 = negative perception
1.00-1.50 = very negative perception
In the table below, the details of the statements and the responses are provided:
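The mean-score computation and the interpretation bands described above can be sketched in a few lines of code. The response counts below are hypothetical and serve only to illustrate the procedure; they are not the study's actual data.

```python
# Numerical values assigned to the four forced-choice Likert options
LIKERT = {"strongly agree": 4, "agree": 3, "disagree": 2, "strongly disagree": 1}

def perception(mean_score: float) -> str:
    """Map a mean Likert score to the paper's interpretation bands."""
    if mean_score >= 3.51:
        return "very positive perception"
    if mean_score >= 2.51:
        return "positive perception"
    if mean_score >= 1.51:
        return "negative perception"
    return "very negative perception"

# Hypothetical responses of 30 students to a single statement
responses = (["strongly agree"] * 8 + ["agree"] * 15
             + ["disagree"] * 5 + ["strongly disagree"] * 2)
scores = [LIKERT[r] for r in responses]
mean_score = sum(scores) / len(scores)

print(round(mean_score, 2), perception(mean_score))  # prints: 2.97 positive perception
```

A mean of 2.97 for this hypothetical distribution falls in the 2.51-3.50 band and would therefore be read as a positive perception.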

Teacher Participants' Responses to the Open-ended Questions (questions are given in Appendix A)
Responses to Question no. 1
T1 and T2 mentioned that since feedback was given in Google Docs, students could pay close attention to the comments while checking their papers afterwards. This practice enabled them to work more on the feedback than in the F2F situation. In terms of exchanging peer feedback, the teacher had to play a crucial role: teachers might urge students to exchange feedback. The situation of exchanging peer feedback was quite similar to that of offline classes, as students showed the same enthusiasm while checking their peers' copies. Therefore, T2 considered e-assessment a benefit when it came to exchanging feedback. Similarly, T3 and T4 voiced that comprehensive individual and group feedback could be given to students through Microsoft Teams, Zoom breakout rooms, and Class Notebook. The students even felt more comfortable exchanging peer feedback than in F2F classes. T3 added that this practice of using technology could continue even after the pandemic, as it made the process easier and clearer.

Responses to Question no. 2
Due to the distance learning situation, certain limitations cropped up in bringing variation to the assignments/tasks for e-assessment. T2 and T3 narrated that not all students were competent in utilizing technology. The teachers might integrate various apps to help students become more adept; however, due to students' lack of technological knowledge, it was not always possible to bring variation. T1 and T4 also mentioned that fewer options were available for assessing students' potential through various tasks/activities. Previously, students would sit for on-campus exams, where the chances of copying or plagiarizing were much lower. In online exams, however, students could not be given closed-ended questions, as cheating could not be prevented.

Responses to Question no. 3
All four participants found no difference in measuring students' content knowledge virtually. T2 added that the techniques might have changed, but the same knowledge was being tested online. For instance, reading comprehension and converting that comprehension into noun phrases or selecting a specific option were two key ways of measuring undergraduate students' reading skills in English reading classes. To prevent students from quickly copying from each other off camera, the selection option was intentionally omitted. Students now converted their comprehension into short noun phrases instead of matching headings, so the same skills were still measured. Instead of true/false responses, students now completed flowcharts.

Responses to Question no. 4
While answering this question, the four participants gave similar responses. T2 elaborated that fine-tuning of the tasks was done based on the students' level of proficiency. To illustrate, if students' competency differed, teachers increased the level of difficulty for the competent students to make those tasks enjoyable and challenging at the same time. In one of the EAP courses, T2 had increased the number of citations and the word limit for the more competent students.
It also needs to be explored how students' learning experiences can be improved by introducing innovative changes to the assignments and activities. The current research might open new prospects for designing e-assessment, and it points to the need for a blended approach, in which e-assessment is combined with traditional evaluation based on students' demands and needs.
Although the sample size was limited, this study elicited teachers' and students' perceptions of the efficacy of e-assessment. For a deeper understanding, further research with more teacher and student participants should be conducted to examine the innovative adjustments teachers used to maintain the validity, reliability, and practicality of e-assessments. While discussing their overall experience, the teacher participants mentioned a few hurdles that may render e-assessments ineffective; readers can look for practical solutions to these problems. The findings may encourage teachers and students to acquire appropriate training so that they can continue using e-assessments.