EdTech 505 Week 9 Reflection: The Birth of the Evaluation of Technology

The following passage was assigned to us to reflect upon:

How Evaluation of Technology Was Born

Twenty thousand years ago, Thok was a renowned hunter of dangerous tigers, consistently bringing home meat for the tribe. As his reputation spread, more and more people from surrounding bands came to Thok’s cave for advice on how to hunt successfully, and safely. They would bring him gifts of food, clothing and such, to get him to sit still and answer their endless questions.

Soon Thok realized that it was safer to teach about hunting than it was to hunt, and he was making a better living as well. He stopped hunting, and took up teaching full time.

Years went by. Thok was becoming elderly — over 30 — and a bit infirm. Word of Thok’s wisdom had spread further and further. The crowds were huge. In fact, people at the fringes could no longer hear the great teacher. Thok’s livelihood was at risk, just when he was least able to leave teaching and return to the dangers of hunting.

Perhaps fear was the mother of invention. That very morning Thok had an inspiration. He saw a large banana leaf lying on the ground. Picking it up, he rolled it into a cone, and spoke into the small end of the cone, pointing the large end toward the crowd. It amplified his voice and everyone could hear! Banana leaves were a kind of magic, it was clear. Thus was educational technology born.

But the amazing events of that morning were not yet over. A young cave person, Val, was skeptical about this banana magic. So Val picked up a leaf that Thok had discarded, draped it on top of her head, and walked around that way for the rest of the day. By nightfall, Val had learned absolutely nothing new about hunting. She tossed the leaf aside, and told all her friends that the old man was wrong: Val’s research had demonstrated conclusively that technology had no role in education.

And that’s how the evaluation of educational uses of technology was born.

So how is this story related to EDTECH 505? Connect the story to our course.

Though not a scholar, I am quite skilled in the tradition of the parable, so I have to ask: how deep should I go with this analysis? The first point that deserves some attention does not relate to evaluation or this course so much as to program design. How is it that the crowds keep getting bigger over time? If people keep coming back to hear more, the knowledge Thok is sharing about hunting must not be very useful; ideally, once they learn the skills, they should be out applying them rather than returning for another session of the same. Even if he is presenting various levels of hunting strategies, he cannot address all of those levels in a heterogeneous group of Homo sapiens. More design and purpose should be built into the hunting strategies program, and it seems that no one is taking the time to evaluate this.

Now, focusing our attention on evaluation: the banana leaf obviously represents a tool used to increase voice projection so that people further back could hear Thok’s wisdom. The perspective of the bystanders, however, is an interesting one. What would their perspective provide for the evaluation process? The people sitting behind Thok or to the side benefit less from his tool, while the people in the path of his voice benefit the most. In this parable, the trajectory of the voice represents technology trends, and the people who benefit most from it are those who position themselves in that trajectory. Perhaps this was another missed opportunity for evaluation.

Though Thok was sharing his knowledge about hunting, the tool he chose added nothing to his ability as a hunter, so his audience would have no use for the tool either, at least not for accomplishing the tasks associated with hunting. Val was fortunate that she did not try to kill a saber-toothed tiger with a banana leaf. Val also made a false judgment (or hypothesis) about the use of the tool, which she thought could transmit knowledge independently of Thok. Since her premise was based on a fallacy, the conclusion of her evaluation is also false.

This last point relates most to the course: knowing whether you are evaluating a program that uses tools, or testing a hypothesis about the use of tools. Val apparently did not know that she was testing a hypothesis. What we can learn most from Val’s mistake is that, while doing an evaluation, we should not focus so much of our attention on the tool itself, but rather on the effect of the tool on the participants.

EdTech 505 Week 8: Request For Proposal

505 Week 8 RFP

This week we had to prepare a proposal for a fictional company that is interested in pursuing a marketing campaign for their educational program development package.  The attached document is my proposal.

EdTech 505 Week 7: PSS Evaluation Model

Use your new understanding about evaluations to address this question: Which evaluation model from chapter 5 would you choose for your own Evaluation Report-Course Project?

I’m warming up to the idea of doing something creative for one of these assignments, but the amount of time it would demand made me resist this time. So for now, I will explain my choice in writing. My ideas are not completely clear to me yet, but I find that writing my thoughts out helps bring more clarity.

This is my last year to teach in my current school because my wife and I will be moving back to the USA during the summer. I am quite invested in one group in particular because I have been working with them for two years in a row. This year especially, I have established a strong presence of Web 2.0 tools, with all students setting up Google accounts at the beginning of the year. In terms of technology integration, I have been more progressive than most of the faculty. Though many of my colleagues admire what I have done, no one in the English department has made strides to do the same. An evaluation of this program, Peer Structure and Support, can help the school, and specifically the English department, decide whether more teachers should implement it.

After reviewing the models, I believe the decision-making model would best suit my evaluation. This selection is based on three factors: 1) the program can establish a continuation of technology integration in the school’s approach to learning, 2) the program demonstrates that learning is not a spectator sport and that students can actually play in the game, and 3) the evaluation will lead to my own decision about further development of the program in my professional life.

My students already have Google accounts set up for school use. When they enter 10th grade next year, they could easily continue using these accounts for multiple assignments. The x-factor is whether the next teacher will facilitate these types of assignments or accept student work in digital formats. This program asks students to create digital peer assessments using Google Forms. Showing that this has a positive impact on students will help other teachers see the benefit of such an initiative.

Many of my colleagues continue to use teacher-centered instruction.  Evaluating this program will allow teachers to view the results of a learner-centered activity.  Both students and teachers struggle to see the constructivist view of learning, which engages the students to discover learning for themselves.  No longer do they have to look at the material through the eyes of someone else’s assessment, but they can learn to identify objectives and create their own assessments. 

Lastly, as I move on, I hope that this evaluation will validate my beliefs about the effectiveness of this program.  I know this is a bad sign for the evaluation, as I am supposed to remain as unbiased as possible.  Nonetheless, as I enter into a new teaching environment (still unknown at this point), I want to be able to make a decision about whether or not my next group of students can benefit from this program.

EdTech 505 Week 7: Vision and Evaluation

The following account was taken from our course site; I will respond to the question that follows.

Recently, I was talking with three exceptional education teachers at a technology conference. These three colleagues described their classrooms to me. They invited me to visit. So I did.

I went into several classrooms. I approached one of the teachers and asked, “What are you doing?” “I’m teaching reading,” he replied.

Then I asked another teacher, “What are you doing?” “I’m showing these students how to have good study skills,” she said.

Then, I asked the third teacher, “What are you doing?” The woman put down her pen and said, “I’m helping all my students achieve their maximum potential in academics and social skills so that when they go out into the world they will be magnificent contributors.”

Now, all three of these teachers had the same job, but only the last teacher had vision. She could see beyond the daily grind of teaching and see her students contributing mightily to our society. In our lives and in our jobs, sometimes it’s hard for us to stay focused on the larger vision, to rise above the mundane, above the day-to-day.

In history, special people had that vision, one that has benefited us all. In my own work, I too sometimes get caught up in the details of the daily grind. I go to meetings, read reports, and talk to colleagues. But there are times when the big picture is as clear as day, when I feel truly connected to issues and ideas much larger than myself, larger than any job, larger than any single organization.

How is this story related to EDTECH 505 and, more specifically, to the readings for this week? Do you have to have vision to be a successful evaluator? How does vision fit with choosing the most appropriate evaluation model for a particular program?

I understand this vision very well as an educator, even though I, too, get bogged down in daily tasks.  One of the main reasons I am studying EdTech is because I think it is the best way to prepare myself for the future of education.  The course work has revealed to me the importance of transferring digital literacy and virtual awareness to my students, empowering them as 21st Century learners.  However, applying this vision to the role and process of evaluation is a new challenge for me.

This week’s readings focused on selecting the right evaluation model for a project. Though not all models are mentioned, several are discussed, with details of their advantages and disadvantages. In any evaluated program there are many variables to consider, and focusing on each variable requires detailed attention. Some models require that the evaluator be present throughout the process and even provide input on the variables; this supports the overall goal because the “big picture” depends on examining each variable closely. In this case the system variables depend on each other for efficiency, and this affects the “big picture”. However, not all evaluation models require this type of analysis; in other words, the variables are not entirely dependent on each other.

In education, it is easy to focus on the variables that we perceive to benefit or hinder a program, thus reflecting positively or negatively on the evaluation. In reality, the benefits of an educational experience are vast, and they are not always visible during the period of an evaluation. An example is the popular account of Albert Einstein as a schoolboy: bored, unengaged, an underachiever, not the ideal participant in a program being evaluated. Yet somewhere along the way a spark of inspiration led him down the path that helped him change the world. Unfortunately, the opposite can be true of a poor educational experience and its long-standing impact; those participants eventually either overcome and persevere, fade away into the history of the world, or achieve some level of infamy.

In my evaluation project I believe that I am considering the big picture. This is my second year with this group of students, but I know my time with them is coming to an end. I believe the program can benefit them beyond the time that I am there. The purpose of the evaluation is to account for the results in such a way that the torch can be passed along to the next educational chaperone. With regard to my current role, the downside is that I am close enough to the action that I may have a tendency to focus on the variables. I will need to account for these variables in the evaluation, since the stakeholders will be interested in the overall impact, but in reality the variables are not completely dependent on each other. Therefore, my vision will be challenged not only by the variables in the program implementation, but also by my bias toward the program itself.

EdTech 505 Week 6: American Evaluation Association

American Evaluation Association

I wish there were a tab on their webpage called “Do you need an evaluator?” When the toilet stops working, I understand that I need to call a plumber, but it’s not clear to me at what point I need to call an evaluator. From this webpage, it is obvious that you cannot easily put your finger on “evaluation” and say, “here it is.” This is evidenced by the massive amount of text dedicated to their policies page, and by the fact that it was changed six times just last year. However, as an organization gets bigger and each department narrows its sights on its own objectives, I suppose it would be helpful to have some sort of consultant who tries to look at the big picture and promote efficiency.

AEA 365

I found this link within the webpage, and it made me feel human again while reading about “evaluation”. The most recent post talked about randomized controlled trials (RCTs), which mostly went over my head, but what was most interesting were the big names the author associated with them: the Institute of Education Sciences and the U.S. Department of Education. It suggests that the influence of these two institutions is pushing the evaluation industry to conform to RCTs. The post gave some practical advice and links for someone who wants to learn more about applying RCTs.

Reflection

As I was reading through these webpages, the same feeling emerged as when I read the course textbook: evaluation is necessary and serves a purpose, but I’m probably not the guy to make a career out of it. I am willing to learn, and perhaps I will come to see things differently, but I imagine that most of the time, when someone is evaluating a program, I will be on the microscope tray rather than looking through the eyepiece.

As I read about evaluation, here are some other questions that pop into my mind, and I’m curious what the answers might be:

  • Does every program have a life cycle?  If so, when do you decide to let it die or evaluate it?
  • Is it possible to get bogged down in evaluation to such an extent that you sacrifice efficiency?
  • How has evaluation influenced the history of the world?  For example, how did evaluation, or the lack of it, affect the fall of the Roman Empire?

EdTech 505 Week 5: Gap Analysis

Peer Structure and Support

Brief Overview:  This is a classroom instructional management design that requires students to create peer assessments of literary content and analyze peer responses.  The purpose of this evaluation is to measure how effectively Google’s Web 2.0 tools are used in creating the peer assessment project.

Needs Analysis

As I have indicated previously, this is a program that I successfully implemented in another school, but I had not yet applied web tools there, so the peer assessments were created by students and printed out for classroom distribution.  Additionally, when one student failed to meet the deadline, it affected the whole class.  In my current school, by creating these assessments in a web-based format, we can reduce paper consumption, and the web-based collaboration feature makes the whole team accountable to the deadline instead of relying on one student.

The Goal

The objective is to make the peer assessment process more efficient by using web-based tools.  An additional objective with this current group of students is to measure the effectiveness of peer assessment for developing analytical literacy skills.  At the end of the evaluation, a recommendation can be made to continue with this program for future literacy activities.

The Program (Bridge)

All the students will read the same selected text.  The facilitator will distribute the assessment tasks to student directors, who will meet together to discuss those tasks as they relate to the deadline.  The student directors will then meet with their teams of students to delegate responsibility among the members.  Each team will work together to create assessment artifacts that target the objectives and will determine the appropriate responses to meet those objectives.

Students will be provided time with a computer to create a collaborative document, questionnaire, spreadsheet, and presentation, as well as time to take the peer assessments of other students and to analyze the results of their own assessment.


Peer Structure and Support

Philosophy and Goal

Through the process of assessing peer skills and knowledge, the students become more aware of their own ability to interpret literature and analyze peer responses.

Needs Assessment

In order to develop critical thinking skills for the students, the educational experience needs to be relevant for the learner.

The program facilitator needs to provide

  • rich literature for the assessment tasks
  • examples of assessment tasks
  • feedback on assessment artifacts

Students need to make the learning experience more relevant by

  • analyzing text for peer assessment tasks
  • analyzing peer responses of assessment tasks

Program Planning

  1. The students will take a pre-survey about assessment tasks.
  2. The whole group of students will read the selected text.
  3. Student groups are formed with a director, who discusses peer assessment tasks and coordinates the collaboration of the team.
  4. Each team will create an online assessment that targets the group’s assessment tasks.

Implementation and Formative Evaluation

During this phase the facilitator will review the assessments created by each group to verify that they properly understood the assessment tasks and to clarify any misguided assessment artifacts.  Once the peer assessments are ready for distribution, the whole class will respond to the quizzes created by their peers.

Summative Evaluation

After the students have responded to the peer assessments, each team will collect and analyze the data.  They will put together an expository presentation that shows the anonymous responses from the class, identifying positive and negative response characteristics relative to their intended assessment tasks.
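As a sketch of what a team’s data analysis might look like, here is a short Python example. The questions, responses, and answer key are all hypothetical; the idea is simply to tally, for each quiz question, what share of the class chose the answer the team intended, which is the kind of summary a team could present:

```python
from collections import Counter

# Hypothetical anonymized responses to a team's peer-assessment quiz.
# Each row maps a question ID to the answer one classmate chose.
responses = [
    {"Q1": "B", "Q2": "C", "Q3": "A"},
    {"Q1": "B", "Q2": "A", "Q3": "A"},
    {"Q1": "C", "Q2": "C", "Q3": "A"},
    {"Q1": "B", "Q2": "C", "Q3": "D"},
]

# The intended answers the team defined for its assessment tasks.
answer_key = {"Q1": "B", "Q2": "C", "Q3": "A"}

def summarize(responses, answer_key):
    """Return the share of respondents who matched the key, per question."""
    totals = Counter()
    for row in responses:
        for question, answer in row.items():
            if answer == answer_key[question]:
                totals[question] += 1
    return {q: totals[q] / len(responses) for q in answer_key}

summary = summarize(responses, answer_key)
for question, rate in sorted(summary.items()):
    print(f"{question}: {rate:.0%} of the class matched the intended answer")
```

Questions with very low match rates might point to a misguided assessment artifact rather than a gap in classmates’ understanding, which is exactly the distinction the formative review is meant to catch.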

EdTech 505 Week 5: Evaluation in Program and Planning Cycles

ADDIE Model

I remember learning about ADDIE in EdTech 503, Instructional Design, and the evaluation component was easily understood within the whole ADDIE cycle.  As an educator, my mind is already trained to see evaluation as a component of instruction.  Now that I’m taking EdTech 505, Evaluation, that component has become harder to grasp.  I feel like I have been trying to cut out the piece of the pie called “Evaluate” to see if it tastes different from the rest of the pie.  In other words, even though the pie does have separate pieces, it is all made from the same ingredients; one piece cannot be completely independent of the others.

The ABCs of Evaluation, p.51

Evaluation: Program Cycle

This model does not stray much from the ADDIE model, but you can make the distinction by the purpose of the model.  ADDIE relates specifically to instructional design, whereas the Program Cycle can apply to instruction or to any active part of a system or organization, whether it relates to instruction or not.  This model accounts for both formative evaluation and summative evaluation, which the ADDIE model does not distinguish.  It also suggests that implementation strategies can change according to the formative evaluation during one rotation of the cycle.

The Planning-Evaluation Cycle

This model does not fit as easily into an educational or instructional situation.  Even though the components of ADDIE and the Program Cycle appear in this model, they are distributed quite differently from the other two models.  For example, this cycle includes analysis and design as part of the evaluation phase.  Nor does this model clearly distinguish between formative and summative evaluation; it almost suggests that the whole evaluation process is formative.  It appears that this model would be good for analyzing some function or feature of an established system; based on the results of the evaluation phase, the ADDIE model could then be applied as an instructional model within the planning phase to address the needs discovered during the evaluation phase.
