Uploading the author accepted manuscript to my institutional repository is now pretty much the first thing I do as soon as I have a new paper accepted (before even tweeting about it!). But something I have been thinking about this week, which I had never really considered before, is the question: is it OK to reformat author accepted manuscripts/preprints for sharing?

Journal submission guidelines usually mean that the author has little leeway with formatting (e.g. it must be 10-point Arial, double-spaced text, that sort of thing). Some simple formatting tweaks would make things easier for the potential reader – for example, reducing the page count – and give the author more creative control over the presentation of the product of their hard work. But is this permissible? I’ve not got a definitive answer yet. For example, the Elsevier article sharing policy states that preprints and accepted manuscripts should:

not be added to or enhanced in any way to appear more like, or to substitute for, the published journal article

My question is not motivated by wanting to imitate the formatting of the journal. But does this mean that it is OK to enhance a manuscript in a different way, such as applying your organisation’s in-house style guide, for example? I’d be very interested in hearing people’s views and experiences on this. Thanks!


Getting to grips with my role at the EdTech Hub, one of the things I have been thinking about recently is where the links and overlaps lie between ‘EdTech for development’ (itself a broad term, with numerous synonyms and not-quite synonyms, such as ICT4D) and my more familiar ground of Open Education. This is of interest both to me personally and to the EdTech Hub, as being aware of the points of reference across different related communities within the field will potentially help to communicate its work and findings.

I am going to explore this through citation network analysis (no surprises there 😉 ) and will be presenting the findings at the OER20 conference in April (conference abstract below). I have a plan for how to seed the network (focusing on the most highly cited papers within the fields over the past two years; a rough sketch of that step follows the questions below), but it remains to be seen how it will develop. I’ll share how it goes here in due course, but I also have two questions:

  • Can you recommend any recent key papers which I should ensure that I include in the sample?
  • I’m going to take open education, ICT4D and digital development as fields to focus on initially – are there any others that you would include?
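For the technically curious, here is a minimal sketch of how the seeding step might look, assuming a hypothetical papers.csv of candidate records (with title, field, year and citation_count columns) gathered from database searches – an illustration only, not the actual sampling workflow:

    import pandas as pd

    # Hypothetical candidate records from database searches;
    # assumed columns: title, field, year, citation_count
    papers = pd.read_csv("papers.csv")

    # Restrict to papers from the past two years
    recent = papers[papers["year"] >= 2018]

    # Take the ten most highly cited papers within each field as seeds
    seeds = (
        recent.sort_values("citation_count", ascending=False)
              .groupby("field")
              .head(10)
    )
    print(seeds[["field", "title", "citation_count"]])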

(Image from the OER20 conference website, CC4.0 licensed – click image for link to source.)

Abstract

The EdTech Hub is a recently instituted programme to assess the potential for educational technology to enhance achievement for school-aged learners in low- and middle-income countries (LMICs), and to develop and evaluate new educational innovations within partner countries. The work of the Hub takes its lead from United Nations Sustainable Development Goal 4, which seeks to ‘ensure inclusive and equitable quality education and promote lifelong learning opportunities for all’ (UN, 2019). The aim of the research is to understand how educational technology can support education systems change, improving outcomes for all learners in a scalable and sustainable way. To achieve this, participatory approaches and iterative co-design will be used, embedding the research within local practice and aligning with the theme of care.

Open education plays two roles within the work of the Hub. First, Open Educational Resources (OER) sit within the scope of the research, as the potential for OER to play a beneficial role in addressing educational inequalities is frequently linked to the challenges in educational systems within LMICs. To date, this has been the focus of several research programmes, and there is an established and growing body of research literature on the topic (Hodgkinson-Williams & Arinto, 2017).

Second, the programme itself is committed to open practices and the production of global public goods (Haßler, 2018). The Hub strives to effect positive changes at all levels of educational systems, and to do so will need concerted efforts across related fields such as Information and Communication Technologies for Development (ICT4D), Digital Development and Open Development (Wagner, 2017). This raises the question of where the points of contact lie between these sectors and open education. Where do the interests of the different communities overlap? Can areas with the potential for novel links be identified?

In this session, these issues will be examined through exploratory citation network analysis of a sample of literature across the sectors. By sampling recent literature on the topics, a network can be constructed using the relationships between papers and the literature they cite as the basis for connections. Salient features in the emergent network structure can highlight influential, highly cited nodes, and clusters can reveal sub-groups within a field, for example. As such, considering the literature cited can be an effective way to visualise different schools of thought within a research area (Weller, Jordan, DeVries & Rolfe, 2018).

References

Haßler, B. (2018) Global goods: Example document for licensing and publishing documents. Zenodo. DOI: 10.5281/zenodo.1201612.

Hodgkinson-Williams, C. & Arinto, P.B. (2017) Adoption and impact of OER in the Global South. Cape Town & Ottawa: African Minds, International Development Research Centre & Research on Open Educational Resources. DOI: 10.5281/zenodo.1005330.

United Nations (2019) Sustainable Development Goal 4. United Nations Sustainable Development Goals Knowledge Platform website. Retrieved from: https://sustainabledevelopment.un.org/sdg4

Wagner, D. (2017) Technology for education in low-income countries: Supporting the UN Sustainable Development Goals. In ICT-Supported Innovations in Small Countries and Developing Regions (pp. 51-74). Springer.

Weller, M., Jordan, K., DeVries, I. & Rolfe, V. (2018) Mapping the open education landscape: Citation network analysis of historical open and distance education research. Open Praxis, 10(2), 109-126.

This is a re-post of a blog post written recently for the EdTech Hub, which I joined in September last year. The last few months have whizzed by; at some point, I *will* fit in writing a post here about my new role! But in the meantime, here is a snippet of what I’ve been working on so far (link to the original post in the title, below):

Reviewing the research literature in Educational Technology for Development: Balancing rigour and inclusivity

Katy Jordan, Research Associate, EdTech Hub

Within the Research Sphere of the EdTech Hub, one of the main activities of the inception phase has been to conduct a literature review. Undertaking a literature review is a typical first step in any research project, establishing an informed foundation upon which to conduct further research.

The scope of the EdTech Hub is unusually wide for conducting a literature review. EdTech itself is a term which can be applied to a wide range of technologies used in educational settings. Similarly, the focus on low- and middle-income countries (LMICs) encompasses a diverse range of countries, territories and regions.

Given the wide scope of the programme, a two-stage approach has been used: initially, a large-scale scoping review providing a breadth of understanding of the topic, followed by systematic reviews focused on particular themes. Systematic reviews consider papers published on a very specific topic and compare the findings across studies in detail. Although systematic reviews have their roots within the health and biomedical sciences, they are increasing in popularity as a research methodology within the social sciences (Figure 1).

Figure 1: Number of records returned via Scopus based on the query “a systematic review”, for journal papers within the Social Sciences (orange markers, left-hand axis). 2019 is incomplete (search undertaken 10/10/2019). The trend in number of articles indexed in Scopus for Social Sciences as a whole is included for comparison (grey markers, right-hand axis).

Before conducting a literature review, it is necessary to set certain bounds to define how literature will be found and selected for inclusion. Transparency in this respect is necessary to ensure that readers can judge the limitations and strengths of the evidence base, and hence how reliable the findings are. Setting and clearly articulating the sources of literature and inclusion criteria is also central to the rigour of systematic reviews as a methodology, as it facilitates the replicability of approaches and results. Details such as these are published in search protocol documents.

However, there are no established protocols associated with this topic, so there was an immediate question of how we should tailor our approach. To learn from the field and develop a protocol, we drew upon 22 existing literature review-based papers with foci related to those of the EdTech Hub. Further details will be available shortly in our forthcoming working paper (docs.edtechhub.org/lib/NM6CPLE9, DOI: 10.5281/zenodo.3523943), but here we will focus on an issue which surfaced in the analysis and required particular attention in our context: balancing academic rigour with inclusivity.

Two of the main components of a literature review protocol are the sources of information, and the criteria against which found literature will be judged for inclusion. In terms of finding literature, databases and other data sources, such as academic journals and institutional repositories, were listed in the papers dealing specifically with literature reviews (19 papers). The databases and their frequency, grouped according to subject area, are shown in Figure 2.

Figure 2: Research databases and other data sources used within the sampled documents, arranged according to broader subject area. Number of occurrences in brackets.

In the papers reviewed, a range of criteria was recorded for the inclusion and exclusion of literature. The most frequently used criteria included whether articles were peer-reviewed; publication in an academic journal; the presence of particular keywords in the title and/or abstract; the type of research (e.g. empirical); and language.

Underpinning several of these factors is an assumption that being published in an academic journal is a sign of quality, and academic journal articles are readily indexed within the main databases used for literature searches. However, it is important to caution against relying exclusively on academic journals as a source for the EdTech Hub’s review, as academic journal publishing exhibits geographical biases (see Figure 3, for example).


Figure 3: Global map, where territory size is depicted proportional to the number of scientific journal articles published in 2016. Worldmapper website: https://worldmapper.org/maps/science-paperspublished-2016/ (CC BY-NC-SA 4.0)

The EdTech Hub’s approach needs to pay particular attention to setting the bounds of its literature review in a way which balances rigour with inclusivity, a consideration which is also important for anyone undertaking a literature review concerning LMICs. Database searches will be supplemented by opportunistic searches through experts and informal networks. Grey literature is at risk of being excluded by database searches, yet blogs, presentations, informal publications and other communications may play an important role.

Searching for literature is just the first step of the process, and we will be writing further blog posts along the way. Following on from highlighting the need to be aware of biases in academic publishing, in the next blog post from the Research Sphere, we will discuss how we have applied the need for inclusivity to our inclusion criteria.

When I used to work for the TLRP TEL Programme, I had a bit of a side project going, using network analysis to explore the literature cited by the different projects as a way of pulling out the classic papers and common ground across the programme.

If you’re not familiar with the TLRP TEL programme, it followed on from the main phase of the Teaching and Learning Research Programme. It was jointly funded by the ESRC and EPSRC, and its main phase ran from 2008 to 2011, during which eight major projects were funded, spanning a range of topics in Technology-Enhanced Learning. These included (thanks to the Internet Archive for links):

  • Echoes: developed a multimodal digital environment to help develop children’s social interactions and support exploratory learning
  • Ensemble: semantic web technologies for case-based learning in Higher Education
  • HapTEL: developed haptic technology for dentistry students
  • Interlife: used Second Life to help support young people to develop social skills and navigate transitions
  • LDSE: a learning design support environment for teachers and lecturers
  • MiGen: intelligent support for mathematical generalisation
  • Personal Inquiry: designing for evidence-based enquiry across formal and informal settings of learning
  • SynergyNet: multi-touch tabletops for collaborative learning

With such a diverse range of technologies and settings, there was naturally a question throughout of what it was that brought everyone together, and what was distinct about TEL. The programme addressed this by organising events and publications around a range of cross-programme themes.

Another way of exploring the threads that ran through the programme would be to look for commonalities in the literature which the projects cite. This could potentially have formed the basis of a collection of classic works in TEL as a reference bank, a bit like the one that Chris Davies and Rebecca Eynon produced in 2015 (‘Education and Technology: Major Themes in Education’). I had a first go at this about ten years ago (!), using TouchGraph. It was a bit clunky though, and formatting the data took a while, so sadly it went unfinished. This was before I had discovered Gephi, and what was a lot of work back then is now a lot easier. (I used this approach on the Openness and Education literature recently, for example.)

It has remained something I’d like to follow up on, though. Being versed in Gephi, it is now a lot easier for me to do this, and the time delay at least means that all the papers which were being submitted, reviewed or in press at the time are now available for inclusion.

Getting the data

I couldn’t find my original files, so set about getting the data from scratch. The TEL website is now gone, so I looked up the projects in the ESRC Research Catalogue website. The project pages in the catalogue list a variety of publication types associated with each of the projects. It had been my intention to include the full range of publications in the analysis; however, it quickly became apparent that many of the items were not available online, either because they had never been online originally or because the links were no longer functional (particularly conference items). As a result, I decided to focus on just the journal papers associated with each project, as these were the most consistently available (even a couple of those had disappeared in the relatively short space of time since the end of the programme) (and readers, take note: please use your institutional repositories!).

This yielded a list of 66 target papers across the eight projects. Four could not be located, and a further four could be found but I wasn’t able to access the full texts. A total of 58 papers were therefore included. From each, the reference lists were copied into Excel and (where multiple records existed) consistently formatted. The data was exported as a two-column CSV file of ‘source’ (cited article) and ‘target’ (TEL project paper which cited it), and imported into Gephi.
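As an aside, the same edge list could be loaded programmatically rather than through the Gephi interface; here is a minimal sketch in Python using networkx, assuming a hypothetical references.csv with ‘source’ and ‘target’ header columns following the convention above (not the exact workflow I used):

    import csv
    import networkx as nx

    # Build a directed graph from the two-column edge list; following
    # the convention above, edges run from the cited article ('source')
    # to the TEL project paper which cites it ('target')
    G = nx.DiGraph()
    with open("references.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            G.add_edge(row["source"].strip(), row["target"].strip())

    print(G.number_of_nodes(), "nodes;", G.number_of_edges(), "edges")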

Exploring the network

The network contained a total of 1,978 nodes (each node being either one of the sampled papers, or any resource cited by them – including other papers, books, chapters, conference items, government sources or websites). When visualising the network, it wasn’t immediately obvious where the bounds of each project lay. If there was nothing in common at all, you would expect to see eight distinct clusters of papers, one for each project. However, in practice, there were varying degrees of overlap (click to view a larger version):

To see where the project boundaries do lie, I colour-coded the sample of journal papers according to projects (I’ve also adjusted the node size to reflect in-degree, to make them stand out a bit more) (again, click to open a bigger version):

(note that the black node was a joint publication, with authors from three of the projects.)
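In Gephi the sizing and colour-coding are applied through the interface; a rough code equivalent, continuing the sketch above and assuming a hypothetical lookup from sampled paper to project, might be:

    # Hypothetical lookup of sampled paper -> project, compiled from
    # the ESRC Research Catalogue listings (not reproduced here)
    project_of = {}

    # Size each node by in-degree: under the cited->citing convention,
    # the sampled journal papers receive an edge from every item in
    # their reference lists, so they stand out in the visualisation
    for node, indeg in G.in_degree():
        G.nodes[node]["size"] = 10 + 2 * indeg

    # Tag the sampled papers with their project for colour-coding
    for paper, project in project_of.items():
        if paper in G:
            G.nodes[paper]["project"] = project

    # Export with attributes intact, ready for styling in Gephi
    nx.write_gexf(G, "tel_network.gexf")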

It is quite interesting here to see how closely related the projects were to each other; within projects, the structure also reflects the extent to which different strands were writing together or working in parallel.

The nodes that I am really interested in are the ones which represent articles which were cited by two or more of the projects. To narrow it down, all the items which were cited only once can be removed, which dramatically reduces the size of the network (now 315 nodes):
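(In code, the equivalent filter – a sketch continuing from above – drops every node with no incoming edges and fewer than two outgoing edges, which under the cited->citing convention is exactly the set of once-cited references:)

    # A reference cited only once has no incoming edges and a single
    # outgoing edge to the one paper citing it; the sampled papers all
    # have incoming edges, so they survive this filter
    once_cited = [
        n for n in G.nodes
        if G.in_degree(n) == 0 and G.out_degree(n) < 2
    ]
    G.remove_nodes_from(once_cited)
    print(G.number_of_nodes(), "nodes remain")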

I examined the network to look for examples which met the criteria (cited by at least two of the projects – I didn’t count items associated with the black node though as this had a disproportionate overlap with a paper in one of the three projects it was associated with) and labelled the results:
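(Programmatically, the same criterion could be checked as follows – again a sketch, reusing the hypothetical project lookup from earlier; handling of the joint-authored ‘black node’ paper is omitted for simplicity:)

    from collections import defaultdict

    # For each cited item, collect the set of projects whose papers
    # cite it, via the hypothetical project_of lookup
    projects_citing = defaultdict(set)
    for cited, citing in G.edges():
        if citing in project_of:
            projects_citing[cited].add(project_of[citing])

    # Keep items cited by papers from at least two distinct projects
    shared = sorted(
        item for item, projects in projects_citing.items()
        if len(projects) >= 2
    )
    print(len(shared), "items cited by two or more projects")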

The result of this was a list of 42 publications (in alphabetical order):


Anastopoulou, S., Sharples, M., Ainsworth, S. & Crook, C. (2009) Personal inquiry: linking the cultures of home and school with technology mediated science inquiry. In Mobile Learning Cultures across Education, Work and Leisure (eds N. Pachler & J. Seipold), pp. 55-58. WLE Centre, London. Proceedings of the 3rd WLE Mobile Learning Symposium, London, 27th March 2009. ISSN 1753-3385.

Anderson, T. & Whitelock, D. (2004) The educational semantic web: Visioning and practicing the future of education. Journal of Interactive Media in Education, 1: 1-15.

Berners-Lee, T., Hendler, J. & Lassila, O. (2001) The Semantic Web. Scientific American 284, 34-43.

Bruner, J.S. (1966) Toward a Theory of Instruction. The Belknap Press, Cambridge, MA.

Cohen, L., Manion, L., & Morrison, K. (2000) Research methods in education (5th ed.). Taylor & Francis Ltd.

Conlon, T. & Pain, H. (1996) Persistent collaboration: a methodology for applied AIED. International Journal of Artificial Intelligence in Education, 7, 219-252.

Dillenbourg, P., Baker, M., Blaye, A. & O’Malley, C. (1996) The evolution of research on collaborative learning. In E. Spada & P. Reiman (Eds), Learning in Humans and Machine: Towards an interdisciplinary learning science (pp. 189-211). Oxford: Elsevier.

Druin, A. (2002) The role of children in the design of new technology. Behaviour and Information Technology 21(1), 1-25.

Engestrom, Y. (1999) Activity theory and individual and social transformation. In Perspectives on Activity Theory (eds Y. Engestrom, R. Miettinen & R.-L. Punamaki), pp. 19-38. Cambridge University Press, Cambridge, UK.

Facer, K. & Sandford, R. (2010) The next 25 years? Future scenarios and future directions for education and technology. Journal of Computer Assisted Learning 26, 74-93.

Fullan, M. (1991) The new meaning of educational change. Cassell, London.

Good, J. & Robertson, J. (2006) CARSS: A framework for learner centred design with children. International Journal of Artificial Intelligence in Education, 16, 381-413.

Greeno, J. (1991) Number sense as situated knowing in a conceptual domain. Journal for Research in Mathematics Education, 22 (3), pp. 170-218.

Kirschner, P., Sweller, J., & Clark, R.E. (2006) Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.

Kobbe, L., Weinberger, A., Dillenbourg, P., Harrer, A., Hamalainen, R., Hakkinen, P. & Fisher, F. (2007) Specifying computer-supported collaboration scripts. International Journal of Computer-Supported Collaborative Learning, 2(2), 211-224.

Kolb, D. A. (1984) Experiential learning: Experience as a source of learning and development. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Koper, R. & Olivier, B. (2004) Representing the learning design of units of learning. Educational Technology and Society 7, 97-111.

Kraemer, K., Dedrick, J. & Sharma, P. (2009) One laptop per child: vision vs. reality. Communications of the ACM, 52 (6), pp. 66-73.

Landauer, T. (1995) The trouble with computers: usefulness, usability, and productivity. MIT Press, Cambridge.

Laurillard, D. (2002) Rethinking university teaching: A conversational framework for the effective use of educational technology, 2nd ed., London: Routledge.

Lave, J. & Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.

Lewins, A. & Silver, C. (2007) Using software in qualitative research: A step-by-step guide. London: Sage.

Merrill, M.D. (1994) Instructional design theory. Englewood Cliffs, NJ: Educational Technology Publication.

Moss, J., & Beatty, R. (2006). Knowledge building in mathematics: supporting collaborative learning in pattern problems. International Journal of Computer-Supported Collaborative Learning, 1, 441-465.

Mulholland P., Collins, T., Gaved, M., Wright, M., Sharples, M., Greenhalgh, C., Kerawalla, L., Scanlon, E., and Littleton, K. (2009). Activity guide: an approach to scripting inquiry learning. In Proceedings of 14th International Conference on Artificial Intelligence in Education, Brighton 6-10 July 2009.

Naismith, L., Lonsdale, P., Vavoula, G., & Sharples, M. (2004). Literature review in mobile technologies and learning, NESTA Futurelab Series.

Noss, R., Healy, L. & Hoyles, C. (1997) The construction of mathematical meanings: connecting the visual with the symbolic. Educational Studies in Mathematics 33, 203-233.

Papert, S. (1980) Mindstorms: Children, Computers and Powerful Ideas. Basic Books, New York.

Piper, A., O’Brien, E., Morris, M., & Winograd, T. (2006) SIDES: A cooperative tabletop computer game for social skills development. Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work, 10. ACM.

Roschelle, J. (2003). Unlocking the learning value of wireless mobile devices. Journal of Computer Assisted Learning, 19(3), pp. 260-272.

Roschelle, J., & Pea, R. (2002) A walk on the WILD side: How wireless handhelds may change computer-supported collaborative learning. International Journal of Cognition and Technology, 1(1), 145-168.

Schon, D. (1987) Educating the reflective practitioner: Toward a new design for teaching and learning in the professions, San Francisco: Jossey-Bass Publishers.

Schon, D.A. (1983) The Reflective Practitioner: How professionals think in action. Temple Smith, London.

Schwartz, D.L., Brophy, S., Lin, X. & Bransford, J.D. (1999) Software for managing complex learning: examples from an educational psychology course. Educational Technology Research and Development 47, 39-59.

Sfard, A. (1998) ‘On Two Metaphors for Learning and the Dangers of Choosing Just One’, Educational Researcher 27(2): 4-13.

Sharples, M., Taylor, J., & Vavoula, G. (2007). A theory of learning for the mobile age. In R. Andrews, & C. Haythornthwaite (Eds.), The Sage handbook of e-learning research (pp. 221-247). London: Sage Publications Ltd.

Slotta, J.D. (2010) Evolving the classrooms of the future: The interplay of pedagogy, technology and community. In K. Mäkitalo-Siegl, F. Kaplan, J. Zottmann, & F. Fischer (Eds.) Classroom of the future: Orchestrating collaborative spaces (pp. 215-242). Rotterdam: Sense.

Vygotsky, L.S. (1978) Mind in society. The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wang, F., & Hannafin, M. J. (2005). Design-based research and technology-enhanced learning environments. Educational Technology Research and Development, 53(4), 5-23.

Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press.

Wingate, U. (2007), A framework for transition: supporting learning to learn in higher education. Higher Education Quarterly, 61 (3), pp. 391-405.

Yin, R.K. (2003) Case study research: Design and methods. London: Sage Publications.

There were a couple which I was slightly surprised did not make the cut (Larry Cuban’s ‘Oversold and Underused’, and Ton de Jong’s work on computer simulations). The list includes a few methodological texts (e.g. Yin, Cohen et al.) and some which are perhaps slightly too specialised, but I think that overall any student or newer researcher in TEL would benefit from reading, or at least being aware of, most of the literature surfaced by this approach.

The other advantage of the time that has passed since the TEL programme is that it would also be possible to look at onward citations of the projects’ papers, which could show how the topics addressed by the TEL projects have changed over time and where they sit in relation to current research agendas in TEL.

One of the nice things about living near Cambridge is being able to go to some of the excellent, usually free, events that frequently happen there. At the end of April I headed to Wolfson College for a one-day event entitled The Digital Person: A symposium (2019).

The event was organised by the Hub of All Things (HAT), a multi-institution research project which has developed a data infrastructure for users to be in control of their personal data online. Users would maintain their data in a personal data account (or ‘HATDeX’). If platforms wished to gain access to the data, they would send requests and pay a fee. So the idea has the advantages of users being able to control their own data, maintain its accuracy, and benefit from the monetisation of that data. (I wondered how that would work for educational data online, particularly interactions, where there are multiple actors. For example, who owns a discussion thread in a MOOC? Individual authors’ posts would be meaningless out of context. Technically I am sure the T&Cs mean that the MOOC provider owns it, but that doesn’t sit well with the goal of giving users control of their data either. This type of data may be beyond the scope of a personal data account, though it would certainly be relevant to large-scale demographic/user-characteristics MOOC studies. Anyway, I’m still thinking about that.)

I was glad to find that the day wasn’t just a sales pitch for why we should all be using HATDeX, though (or rather, it didn’t do so directly; the presentations were all compelling discussions of broader critical issues in the monetisation of online data and privacy, which all underscore why something like HATDeX is a good idea). The presentations are now available online.

The presentations were really helpful resources for anyone interested in these topics (and include a lot of links to other fascinating literature). The 2018 Reuters Institute Digital News Report was referred to several times during the day, and can be found here: http://media.digitalnewsreport.org/wp-content/uploads/2018/06/digital-news-report-2018.pdf. A couple of emergent, interdisciplinary fields were mentioned which were new to me – human-data interaction, and machine behaviour – which I will now be keeping an eye out for. A very stimulating day, and hopefully I’ll be able to get to the 2020 event too (I think registration is already open on Eventbrite!).

One of my goals for this year is to have a bit of a data analysis refresher. Sometimes I wonder if I may be in a bit of an SPSS-Gephi-NVivo rut! (A useful rut, nonetheless.)

My current to-do list includes general use of R and Tableau, and epistemic network analysis and structural equation modelling in particular. Although I have used them before, I’m also hoping to find some time to get properly reacquainted with the Digital Methods Initiative’s latest tools too.

I find it always helps to have data to play with (not just artificial problem-set-style data). I’m planning to have a good look online for open datasets in Education and Educational Technology-related topics. I’m aware that openly publishing datasets is still quite a niche practice – but there is a growing body out there.

I’ll be looking first of course at Figshare, and UK HE institutional repositories. But is there anywhere else I should be looking? Is there a particular place where US institutions post open datasets, for example? Do you know of any particular projects which have released data openly, either through platforms or just on their websites? (No need to recommend the OER Hub, they are already on my list of course 😉 ). I’d be very interested in any recommendations, and will share what I find at a later date.

Recently I looked out some of my old external hard drives. I was looking specifically for old files related to a side project from years ago which never reached fruition but which I’d like to revisit (as it would be a lot easier for me to do now than it was then – anyway, more on that later!).

The files on the hard drives go way back, through the TLRP-TEL Research Programme (~2008-2012), right back to my first research job on the Cambridge-MIT Institute Plant Sciences Pedagogy Project (2005-2008). This was my first academic research post, taken up just a couple of months after finishing my master’s in Plant Pathology. My role in the project involved working within Plant Sciences to conduct educational research into undergraduate teaching and learning within the department, and to develop e-learning resources to support it.

As it was my first experience of educational and technology-enhanced learning research, it was accompanied by a range of vocabulary which was new to me. This must be quite a common experience for people getting started in educational and pedagogic research; Education, as a research field within the Social Sciences, shows a high level of heterogeneity in terms of researchers’ subject backgrounds (as shown in figure 4.3 of David Mills’ chapter in McAlpine & Åkerlind’s ‘Becoming an Academic’).

To help with this transition, I kept a document where I noted the words which were new and unfamiliar, as a working glossary, and I came across this when I was excavating the old hard drive. It was interesting to look back at the words which seemed unusual then but are now all so familiar. It made me wonder to what extent the glossary reflected trends in the field at the time – some of the concepts I’m sure are not as prevalent now as then – and how the glossary would look if it was being constructed today.

The list included the following (minus my slightly cringey notes 😉 ):

  • Action research
  • Activity theory
  • Andragogy
  • Behaviourism
  • Belief (as in the work of Posner)
  • BERA
  • Bloom’s taxonomy
  • Boundary objects (Wenger / CoP)
  • Bourdieu (esp. ‘habitus’)
  • Brokerage (Wenger / CoP)
  • Case study
  • Cognitive
  • Communities of Practice (Wenger)
  • Concept map
  • Constructive alignment
  • Constructivism
  • CPD
  • Deep learning (converse: surface learning)
  • Didactic
  • Discourse analysis
  • Dweck
  • EARLI
  • E-learning
  • Emic
  • Engeström (see ‘activity theory’)
  • Epistemology
  • EPSRC
  • ESRC
  • Ethnography
  • Ethnomethodology
  • Etic
  • Evidence-based (evidence-informed)
  • Focus group
  • Gestalt
  • Habitus (see Bourdieu)
  • HEA
  • Hierarchy of needs
  • Humanistic
  • Intensional networks (Nardi et al., 2000)
  • Lexis
  • Liminal
  • Maslow – hierarchy of needs
  • Meta-cognition
  • Neuro-linguistic programming (NLP)
  • Ontogenesis
  • Ontology
  • Pedagogy
  • Phenomenology
  • Piaget
  • Polanyi (tacit knowledge)
  • Practitioner
  • Reification
  • Reflexivity
  • RLO (reusable learning object)
  • Runaway object (Engeström)
  • Russellian
  • Self-efficacy
  • Self-regulation
  • Sfard (metaphors for learning)
  • Social capital
  • Social constructivism
  • Stenhouse
  • Structuralist
  • Surface learning (converse: deep learning)
  • Tacit knowledge
  • TEL
  • Threshold concepts (Meyer and Land)
  • TLRP
  • Triangulation
  • VLE
  • VRE
  • Vygotsky
  • Wenger

Of course, fourteen years later, it’s not possible for me to view the field in the way that I did then. I’d be really interested to hear from others, though – which were the words that stood out for you when you started out in educational research? What would be on the list now that wasn’t then?