Thanks John, I must say that I agree with you to a large extent. I was actually asked a similar question (well, a more simplistic version) at a job interview a number of years ago and answered along similar lines to your response. The interview panel didn't like my answer, unfortunately, and criticised me as lacking knowledge of evaluation models/frameworks! This really bothered me as I was familiar with many models/frameworks for evaluating educational programs and learning interventions/strategies, but didn't discuss any of them due to the emphasis of the question on the technologies/tools.
The issue has bugged me ever since. After speaking to colleagues from university IT departments and teaching and learning centres/directorates at various institutions, I gained a better appreciation of where the panel was coming from, and developed some insight into why they possibly asked the question in the way they did. Although I still don't necessarily fully agree with some of the assumptions inherent in the question and the premises upon which it appeared to be based, I've become slightly less dismissive of the idea of a generic, non-context-specific model/framework that can be used to inform technology choices.

I can now see at least some practical value in such a model/framework: as you point out, learning contexts and needs often vary from department to department within an institution, and I would even go so far as to add that individual academics/teachers within a department will inevitably differ markedly in terms of their teaching perspectives, styles and approaches. Similarly, students have their own learning styles and technology preferences. It can be near impossible to cater to the precise needs and preferences of all staff and students, at least within an institutional IT model whereby responsibility for learning technologies rests with a centralised department or team. Hence all the talk in recent years about Personal Learning Environments, etc. as an alternative to the institutional LMS model. The reality of the matter, though, is that the centralised model is not going away any time soon, nor are LMSs, and those tasked with institutional-level technology/tool selection must continually make pragmatic decisions about which ones to implement and/or integrate into their LMS. I think they would benefit from a formal way of, and list of criteria for, assessing the suitability of a given technology/tool that is independent of any particular discipline, program or learning scenario.
They would also likely benefit from a systematic method of determining which technology/ies or type(s) of tool would be a 'best fit' given a broad set of needs/requirements.
Kind regards,
Mark
________________________________________
From: J.R. Norman [[log in to unmask]] On Behalf Of John Norman [[log in to unmask]]
Sent: 11 May 2011 19:01
To: Lee, Mark
Cc: [log in to unmask]
Subject: Re: Models/frameworks for evaluating learning technologies
Since universities differ one from another in their learning contexts and often differ department to department within the university, surely the smart thing to do would be to keep the "analysis in context" approach and add a self-assessment diagnostic for the organisation to describe its context in a way that can be matched to the technology recommendations. E.g. if a university offers a problem-based learning model for medical education (many do) - these tools are valuable; if a university emphasises inter-disciplinary studies - these are valuable; for a university with a high ratio of adult learners (or distance learners) - these tools.
I think "what technology is best?" is too naive a question.
HTH
John
On 11 May 2011, at 09:00, Lee, Mark wrote:
> Dear colleagues,
>
> (Apologies for cross-posting... )
>
> I'm attempting to put together a list of models/frameworks that can be used to assess the effectiveness and/or appropriateness of a particular type of learning technology (the technology itself, as opposed to a strategy or intervention such as a program, course, activity, etc. that might make use of the technology). Any ideas/suggestions and pointers to relevant literature would be greatly appreciated; what I compile from this exercise may eventually expand to become an annotated bibliography.
>
> Obviously some--e.g., those who lean toward the Clark side of the Clark/Kozma debate--might argue that it's debatable whether there is really merit in evaluating the technology itself in isolation from the context and application, given that it is not the technology but rather the way in which it is used that gives rise to learning. Nevertheless, university IT departments are often in need of guidance in selecting which specific technologies to implement and make available to academic staff/faculty and students, and in this regard I feel they may benefit from having a systematic and theoretically informed approach to performing their assessments and making their choices.
>
> Thanks very much in advance for your time and assistance!
>
> Kind regards,
>
>
>
> Mark Lee
> Charles Sturt University and University of New England, Australia
>
> + + + + + + + + + + + + + + + + + + + +
>
> DLNET list: Contact the list owner for assistance at [log in to unmask]
>
> For information about joining, leaving and suspending mail (eg during a holiday) see the list website at https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=DLNET
>
> + + + + + + + + + + + + + + + + + + + +
John Norman
Director - CARET
University of Cambridge
[log in to unmask]
+44-1223-765367