EdTech Week 10: Sampling Discussion


The professor gave us the following for reflection:

Whether you are looking to choose students from a class (or from employees in a workplace) for a special project or you are looking for a pool of people to take a survey, the first thing you will need to do is decide on how you will choose your sample, or group of students or employees.

So, based on this week’s readings (The ABCs of Evaluation, Chapter 8),

  • What are some ways you could choose that sample?

The book identifies two categories for gathering sample groups: probability sampling and non-probability sampling.  Once I determined what I wanted to evaluate, the process I used most resembled non-probability sampling, and more specifically a purposive sample.  For me it was a practical and efficient decision.  Since I already knew the framework of the program I want to evaluate, I decided to choose my own group of students and evaluate the program with them.  I could have sought another teacher to run the program with his group of students, but that would have been a major time imposition for both of us, considering the training and information exchange involved.  I have a former colleague who is running a version of the same program with his group of students, but we are so far apart geographically that the observations and information exchange would be limited.  In any case, the sample student population would not be random in either of those scenarios, unless you consider the administrative scheduling of classes to be random.
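To make the contrast concrete, here is a minimal Python sketch of the two approaches; the roster, names, and group sizes are invented for illustration and are not my actual students.

```python
import random

# Hypothetical roster of 30 students (placeholder names only)
roster = [f"Student {i}" for i in range(1, 31)]

# Probability sampling: a simple random sample, where every student
# has an equal chance of being chosen.
random_sample = random.sample(roster, 10)

# Non-probability (purposive) sampling: the evaluator deliberately picks
# the group that fits the program, for example the class I already teach.
my_class = {"Student 3", "Student 7", "Student 12", "Student 21"}
purposive_sample = [s for s in roster if s in my_class]

print("Random sample:   ", random_sample)
print("Purposive sample:", purposive_sample)
```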

  • What experiences have you had in choosing samples?

I have never been in a situation where I was able to draw a sample group from a greater population.  In my limited sampling of populations, I have always had smaller, manageable groups, so I could survey the whole group.  In the program that I am evaluating, I had to make decisions about groups and student leaders, but my decisions were based more on track record than anything else.  The program being evaluated is based on a student hierarchy, which creates a social dynamic that the students are usually not accustomed to.  However, I have set up the survey in such a way that I can measure attitudes, opinions, and impact based on student roles.

  • What are some things to watch out for and/or avoid in selecting samples?

When I think of my own evaluation project, I have to laugh a little when I read this question.  Because I decided to evaluate a program in which I am the facilitator, I had to be very selective about my group.  I work with students at the very volatile age of 14-15 years old.  I was with the same group of students last year.  All of them have matured, but some more than others.  In addition, because of culture and family issues among some students, as well as constraints from the administration, there are variables I cannot control, which makes it difficult to initiate some educational endeavors with several students.  There was really only one group that I could depend on in terms of maturity and responsibility, so it was a no-brainer when it came to selecting that group.

In the case that a sample needs to be random, there are many uncertainties about the people who are selected.  If it is crucial that everyone in the sample gives feedback, then I don’t know if the sample can truly be random.  There are always people who are not able to participate or who do not respond in a timely manner, which makes it difficult for evaluators to depend on them.  In general education, a sample drawn from a particular class has a good probability of containing a mix of skills and academic levels, but as education becomes more specialized, you will likely have a group of students with similar goals and interests.  Regardless, most educational sample groups are preselected groups that neither the teacher nor the evaluator has selected.
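As a rough illustration of that non-response problem, this small simulation draws a random sample and then applies an assumed 70% response rate; the population size, sample size, and response rate are hypothetical numbers chosen only to show the effect.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

population = list(range(500))           # hypothetical survey pool
sample = random.sample(population, 50)  # a truly random sample of 50

# Assume, hypothetically, that each selected person responds with 70% probability.
respondents = [p for p in sample if random.random() < 0.7]

print(f"Selected: {len(sample)}, actually responded: {len(respondents)}")
```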


EdTech 523: Module 4 Reflection


Overlooking the Old City of Jerusalem

Module 4 was a transition period because I was ending one group project while I was getting started with another, and in between we had Spring Break, when my wife and I traveled a few days to Jordan and Israel. I agreed to work with one group to develop an online resource for online teachers and work with some other students to develop some discussion questions for the upcoming module.  This gave me collaboration opportunities where we met in Google Hangouts, shared a Google Doc, and exchanged friendly emails. I’m glad that I did both as I was able to use some of the work from the online resource as a reference in my communication plan.

Doing homework at the Dead Sea.

Additionally, I was able to work ahead by reading the material for the next module so I could prepare the discussion questions I will present.  Lastly, I would like to comment on Chapter 9, “Transformative Learning,” in Building Online Learning Communities.  This chapter was very inspirational for me because it described so much of what learning online has meant for me, and it aligns with my philosophy on learning and teaching.  It also affirms my desire to grow confident learners through online education.

The other course textbook, Learning in Real Time, helped me to envision the role of synchronous communication in online learning.  As I was reading through this text I had to think of discussion questions, but my mind was really opened to the power of synchronous communication for building an online community.

Self Evaluation Using My Grading Scale

It seems natural for me to transition my skills as a teacher to the online environment.  I enjoyed putting together my grading scale for online discussions.  My experience as a teacher has helped me know how to clarify expectations and also prevent problems with students before they happen.  Of course, I imagine students who range from a typical pre-teen to a solid, full-fledged teen, which are the age groups I have been working with over the last few years.  My grading has also been influenced a little by the IB Curriculum, which is my current grading standard.

It is also a little unfair to evaluate myself with my own grading scale, for two reasons.  First, I made the grading scale based on general ideas I have used when responding to a discussion prompt.  This will likely work in my favor, because what I look for in a response is often what I do myself.  The second factor, however, does not work in my favor: I am probably my own worst critic, so using my grading scale on my own work would probably cause me to nit-pick details in my response.  Even after using my own scale to evaluate myself, I would not make any changes to the scale.

Nonetheless, I think of my last post, which responded to one of the students who posted a discussion question.  I know that I didn’t do all the tasks associated with the research of his writing prompt, so I would probably lose about 3 points there.  I make up some ground in the area of content for posting relevant information.  I really wanted to discuss Chapter 9, which was one of the required readings, and no one had made a prompt that addressed this chapter, so I took the opportunity to steer the discussion in this direction; at the same time, I did address many things in the response.

There were many opportunities to respond to other students’ posts, and I know I met the minimum requirement; my posts are generally very thoughtful, so I received all 10 points.  Finally, I am a language teacher, so I have developed many skills for using language in communication.  I make occasional mistakes with my writing, but I usually make a point to review, and I pride myself on my creative approach to writing, especially the introductions.  I know I took care of these details in my response, so I received 5 points for each of those scales.  Oops! I did not include any picture or media to accompany it, but at least I made up for it in this post.  This brings my total score to 32 of 35 points on that post.

Changes to Discussion Facilitation

Even though I have not facilitated a full-scale discussion yet, I can already imagine some of the challenges associated with it.  I already know what it is like to feel overwhelmed with reviewing many writing assignments, so I can imagine the workload easily getting out of hand if students are constantly posting lengthy responses.  I would have to get to know certain features of the LMS that allow me to review overall activity.  Even though I want my students to write with quality and to feel like they are writing with a purpose, I know half of my job is complete just by getting them to do that.  In other words, I won’t feel that it is necessary to read every word, especially in the student responses.  I would have to learn some teacher shortcuts for reviewing these, as well as encourage more peer review and accountability among the students.

EdTech 505 Week 8: Request For Proposal


505 Week 8 RFP

This week we had to prepare a proposal for a fictional company that is interested in pursuing a marketing campaign for their educational program development package.  The attached document is my proposal.

EdTech 505 Week 5: Gap Analysis


Peer Structure and Support

Brief Overview:  This is a classroom instructional management design that requires students to create peer assessments of literary content and analyze peer responses.  The purpose of this evaluation is to measure the effective use of Google Web 2.0 tools in creating the peer assessment project.

Needs Analysis

As I have indicated previously, this is a program that I successfully implemented in another school, but I had not yet applied web tools there, so the peer assessments were created by students and printed out for classroom distribution.  Additionally, when one student failed to meet the deadline, it affected the whole class.  In my current school, by creating these assessments in a web-based format, we can reduce paper consumption, and the web-based collaboration features make the whole team accountable to the deadline instead of relying on one student.

The Goal

The objective is to make the peer assessment process more efficient by using web-based tools.  An additional objective with this current group of students is to measure the effectiveness of peer assessment for developing analytical literacy skills.  At the end of the evaluation, a recommendation can be made to continue with this program for future literacy activities.

The Program (Bridge)

All the students will read the same selected text.  The facilitator will distribute the assessment tasks to student directors, who will meet together to discuss those tasks as they relate to the deadline.  The student directors will then meet with their team of students to delegate responsibility among the members.  Each team will work together to create assessment artifacts that target the objectives and will determine the appropriate responses to meet those objectives.

Students will be provided time with a computer to create a collaborative document, questionnaire, spreadsheet, and presentation, as well as time to take the peer assessments of other students and to analyze the results of their own assessment.


Peer Structure and Support

Philosophy and Goal

Through the process of assessing peer skills and knowledge, the students become more aware of their own ability to interpret literature and analyze peer responses.

Needs Assessment

In order for students to develop critical thinking skills, the educational experience needs to be relevant to the learner.

The program facilitator needs to provide

  • rich literature for the assessment tasks
  • examples of assessment tasks
  • feedback on assessment artifacts

Students need to make the learning experience more relevant by

  • analyzing text for peer assessment tasks
  • analyzing peer responses of assessment tasks

Program Planning

  1. The students will take a pre-survey about assessment tasks.
  2. The whole group of students will read the selected text.
  3. Student groups will be formed, each with a director who discusses the peer assessment tasks and coordinates the team’s collaboration.
  4. Each team will create an online assessment that targets the group’s assessment tasks.

Implementation and Formative Evaluation

During this phase the facilitator will review the assessments created by each group to see whether they properly understood the assessment tasks and to clarify any misguided assessment artifacts.  Once the peer assessments are ready for distribution, the whole class will respond to the quizzes created by their peers.

Summative Evaluation

After the students have responded to the peer assessments, each team will collect and analyze the data.  They will put together an expository presentation that shows the anonymous responses from the class.  They will identify positive and negative response characteristics to their intended assessment tasks.
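As a sketch of what that data analysis step might look like, the snippet below tallies a set of anonymized answers to a single assessment question; the answer choices and responses are invented for illustration, since the real assessments and results will live in the Google tools the teams use.

```python
from collections import Counter

# Hypothetical anonymized responses to one peer-assessment question;
# only the chosen answers are stored, never student names.
responses_q1 = ["A", "C", "A", "B", "A", "D", "C", "A"]

tally = Counter(responses_q1)
total = len(responses_q1)

# Summary a team could drop into its expository presentation.
for choice, count in sorted(tally.items()):
    print(f"{choice}: {count} responses ({count / total:.0%})")
```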

EdTech 505 Week 5: Evaluation in Program and Planning Cycles


ADDIE Model


I remember learning about ADDIE in EdTech 503, Instructional Design, where the evaluation component was easily understood within the whole ADDIE cycle.  As an educator, my mind is already trained to see evaluation as a component of instruction.  Now that I’m taking EdTech 505, Evaluation, that component has become harder to grasp.  I feel like I have been trying to cut out the piece of the pie called “Evaluate” to see if it tastes different from the rest of the pie.  In other words, even though the pie does have separate pieces, it is all made from the same ingredients; one piece cannot be completely independent from the others.

The ABCs of Evaluation, p.51

Evaluation: Program Cycle

This model does not stray much from the ADDIE model, but the distinction lies in its purpose.  ADDIE relates more specifically to instructional design, whereas the Program Cycle can relate to instruction or to any active part of a system or organization, whether it involves instruction or not.  This model accounts for both formative evaluation and summative evaluation, which the ADDIE model does not distinguish.  It also suggests that implementation strategies can change according to the formative evaluation during one rotation of the cycle.

The Planning-Evaluation Cycle


This model does not fit as easily into an educational or instructional situation.  Even though the components of ADDIE and the Program Cycle appear in this model, they are distributed quite differently than in the other two models.  For example, this cycle includes analysis and design as part of the evaluation phase.  Nor does this model clearly distinguish between formative and summative evaluation; it almost suggests that the whole evaluation process is formative.  This model seems best suited to analyzing some function or feature of an established system; based on the results of the evaluation phase, the ADDIE model could then be applied as an instructional model within the planning phase to address the needs discovered during evaluation.


The Reflection Process


I am nearing the end of my EDTECH 542 class, Project Based Learning.  I have collaborated extensively with a classmate to develop a really powerful project that can be implemented across cultures.  We have tailored it to work with our students.  Now that we are near the end, we are asked to consider the reflection process.  This is a component that we need to plan to add to our own project; therefore, it is beneficial for us to consider the structure of a reflection activity.  Dr. Baek has asked us to respond to the following questions.

  • Who will you involve in the process?

In this project, both collaborating teachers will want to spend some time reflecting on the successes and challenges involved in implementing the project among students.  Of course, we will also want to involve the students.  We have planned both a peer assessment reflection for the teams and a personal reflection, which will give students a chance to review all that they have accomplished.  Because this project reaches beyond the classroom and the school, the administrators will be informed of the project’s activity and would also benefit from a reflection.  The parents will likely have to consent to their students’ involvement in the process; therefore, a final reflection from the parents would be suitable as well.  There is a heavy emphasis on technology, so some technical problems are likely at times, and debriefing with the technology or IT department is crucial.

  • What will your process look like?

In this particular project, the students will be involved in peer assessment and self-evaluation through an online delivery, such as a survey.  For the students in one class, there could be a discussion or a written response, especially if we are able to show anonymous responses from students in another culture.  For administrators and IT, the reflection can take place in meetings.  For parents, the best option is also asking them to complete a survey online.  Questions will focus on the experience, the effectiveness of the activities, the challenge of cross-cultural collaboration, and the learning that participants take from the project.  Since this will be the first time for most students, we will also include a comparative reflection on the project-based learning process versus traditional learning methods.

  • Is it just a one-time assessment?

For this project, the reflective assessment will be a one-time event.  There is hope that the success of this project can propel similar projects in the future, perhaps in the same school year.  The drive to complete content within the school year does not give us sufficient time to add additional reflections to the project.

I am reminded of a time when I was able to use a reflection process effectively with students who completed a project-based task that required them to involve the whole class in the process.  There were classroom-control issues when some students were measuring the responses of the class.  The student leaders were allowed time to reflect on the problems that the teacher noted, and they were given a chance to offer solutions.  The next time we did this type of project, we were able to implement some of the solutions that the students offered for class control.  It did not put an end to the challenges, but since the students were more prepared, it helped make the process more effective.


PBL Reflection: Assessments and Rubrics


Video Introduction

Name:_________________________________

Scale: 4. Above Standard / 3. Standard / 2. Approaching Standard / 1. Below Standard

Video Content: Information required for the video introduction.
  4. Above Standard: All information was clear and relevant and followed the instructions. Responses were thoughtful and well planned.
  3. Standard: All the information requested was included in the video.
  2. Approaching Standard: Important information was included, but the video was missing specific information from the instructions.
  1. Below Standard: A video was recorded, but the student clearly did not follow the instructions.

Presentation Quality: The quality of the narration of the recording.
  4. Above Standard: Narration was clear. The narrator varied voice and volume for interest and, when appropriate, spoke naturally rather than reading word for word.
  3. Standard: Narration was clear and interesting, but did not have a natural flow.
  2. Approaching Standard: Narration was either too loud or too soft; it seemed monotone and sounded like a boring presentation.
  1. Below Standard: The project included no narration.

Technical: Following the instructions for sharing and posting the video.
  4. Above Standard: The video was labeled correctly according to the instructions and the link was posted before the deadline.
  3. Standard: The video link was posted within the deadline.
  2. Approaching Standard: The video link was posted late to the appropriate document.
  1. Below Standard: The student either did not complete the assignment or was not able to post the link to the document.

Written Response: After watching your partner’s video introduction, a response is written below the link.
  4. Above Standard: An appropriate length response was written courteously, and the font color of the response was changed according to the instructions.
  3. Standard: An appropriate length response was written and was courteous.
  2. Approaching Standard: The response was either too short or too long.
  1. Below Standard: The response was not appropriate or the student did not write a response.

MyT4L Rubric

www.tech4learning.com

This week we were asked to consider the assessments of a project and to make a rubric that reflects the learning goals and expectations of the activity.  The table above shows the thought that went into the assessment of this particular assignment, called “Video Introduction”.

Rubrics provide essential guidelines for reaching particular standards.  As a student, I have been able to guide my own progress and completion of a task by checking the grading expectations.  As a teacher, I have also implemented the use of simple rubrics.  I teach in the International Baccalaureate (IB) system, specifically in the Middle Years Program (MYP).  This curriculum provides a rubric as a guideline for assessment.  That rubric is a little meaty for my students, so I trim it down to something more digestible.  I created the user-friendly rubric so students could use it to measure their own progress.  In one instance, the students were able to use the rubric to make judgments about their peers.  On another occasion, I did a fun writing exercise with my advanced students: I asked them to write about a topic while specifically targeting a rating on the rubric, and then I had to guess which rating they were targeting.  Some of the students intentionally lowered their level to see if I could guess which lower rating they were trying to reach.

I like the idea of allowing students to create their own rubric, as long as they are aware of the standards they need to reach.  It would be nice to see how well they can word the expectations.  Unfortunately, for the most part, the students I have worked with lack the maturity and independence to take on such a task.

For the purpose of this rubric, the main content being assessed is how well students use their reading skills to follow the instructions and how well they use their language skills to respond to the prompts.  I based the expectation categories on the example rubrics from the Buck Institute for Education; therefore, I chose the indicators Above Standard, Standard, Approaching Standard, and Below Standard rather than assigning point values.  The MYP rubric is based on an eight-point scale, so this system will translate more easily to the MYP rubric.
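Because the four indicators will eventually need to line up with the MYP’s eight-point scale, here is a small, purely hypothetical sketch of one possible conversion; the actual mapping would have to follow the MYP criterion descriptors rather than the values assumed here.

```python
# Assumed, illustrative mapping from the rubric's four indicators to an
# MYP-style eight-point value; not an official IB conversion.
INDICATOR_TO_MYP = {
    "Below Standard": 2,
    "Approaching Standard": 4,
    "Standard": 6,
    "Above Standard": 8,
}

def myp_score(indicators):
    """Average the converted values across criteria and round to a whole point."""
    converted = [INDICATOR_TO_MYP[i] for i in indicators]
    return round(sum(converted) / len(converted))

# Example: one student's ratings across the four criteria of the rubric above.
print(myp_score(["Standard", "Above Standard", "Standard", "Approaching Standard"]))  # 6
```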

This assignment meets the following AECT standards

  • 1.1.5.a Utilize a variety of assessment measures to determine the adequacy of learning and instruction.
  • 1.1.5.b Demonstrate the use of formative and summative evaluation within practice and contextualized field experiences.
  • 1.1.5.c Demonstrate congruency among goals/objectives, instructional strategies, and assessment measures.
  • 1.3.a Select instructional strategies appropriate for a variety of learner characteristics and learning situations.
  • 2.1.1 Develop instructional and professional products using a variety of technological tools to produce text for communicating information.
  • 5.3.2* Develop and implement a school media program evaluation process.
  • 5.3.3* Use a variety of summative and formative assessment techniques for the evaluation of the school media center and for the school media program.
