The Troublesome Basis of Threshold Concepts
Fifteen years ago, Meyer and Land (2003) defined the term “Threshold Concept” (TC) as “a portal, opening up a new and previously inaccessible way of thinking about something”. In essence, TCs aim to map the “turning point” at which a student comes closer to understanding the overall topic simply by grasping that one crucial concept (similar to an “Aha!” moment). TCs can therefore be said to provide a means of identifying the “critical path” of necessary steps leading students from ignorance to understanding.
However, TCs have many shortcomings because, like some other learning theories, the framework assumes that all students (should) learn and understand in the same (or a similar) way — a dogma born out of convenience. O’Donnell (2010) beautifully summarises the inadequacies of TCs in the conclusions of his critique, starting from the vague generality of troublesome knowledge (i.e. the concepts which are most difficult for students to comprehend), and going all the way to calling TCs “an incoherent framework that leads to nonsense or false conclusions”.
Perkins (2006) theorises that troublesome knowledge can be characterised as alien, difficult, inert, ritual or tacit, depending on the nature of each concept. However, Felten (2016) argues that, in addition to the content, the students’ feelings need to be taken into account as well, since “troublesome-ness” can also originate from factors that are completely external to the process of teaching or the nature of the knowledge. For example, Illeris (2007) posits that mislearning (i.e. the student acquiring erroneous knowledge by understanding something different from what was taught), as well as defence against learning due to ambivalence (uncertainty) or resistance (for the student’s own reasons), could also introduce barriers to learning.
In my experience, uninterested students will always look for shortcuts — but the same applies to interested, yet grade-focused, students. I still cannot forget an anonymous comment I once received as feedback for one of my modules, where the student bluntly stated: “The point of university is to get taught. I did not come here to read a book”. This is the reason why one of the most common questions I receive every year, unfortunately, is: “Will this be on the exam?”. Mezirow (1991) suggests ways we can overcome some of these issues by looking into their psychological foundations; for example, identifying the type of distortion keeping a student from comprehending, and then attempting to reflect on its causes. However, with large cohorts, this is not always feasible to address under the current organisational structure.
Occasionally, a student may dishearteningly declare something along the lines of: “Even if I learned everything you are trying to teach me, we (humans) know everything already — there is nothing truly new to discover!”. Anthropologists like Van Gennep (1960) have called this stage “liminal” — i.e. the transitional phase between separation (not knowing something) and incorporation (understanding it). In order to help such students progress into a post-liminal stage, I ask them to indulge me by performing a seemingly straightforward task: for example, going on Google, restricting the results to the NHS website only (site:nhs.uk) and then searching for the phrases (both in quotes): “The exact cause of” “is unknown”. The thousands of different results that are returned provide ample proof that we only think we know how the world truly operates, just because we manage to get through the day without accidentally falling into a fourth-dimensional black hole. In truth, however, everything we “know” is just an agreement — an invisible contract incorporating commonly accepted signs – like letters, numbers and symbols – in specific orders. This is why, in the very first lecture of each of my modules, I add a touch of “philosophy” by announcing that “everything I am about to tell you is made up – and one day you can make up your own too”.
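For anyone who wants to reproduce this little demonstration, the search itself is trivial to express programmatically. A purely illustrative Python sketch follows; the query string is the only substantive part, and the printed URL is simply a convenience.

```python
# A minimal, illustrative sketch of the NHS search described above:
# restrict Google to nhs.uk and look for two exact phrases.
from urllib.parse import quote_plus

query = 'site:nhs.uk "The exact cause of" "is unknown"'
print("Search query:", query)
print("URL: https://www.google.com/search?q=" + quote_plus(query))
```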
Professor Philip Saffman declared in an Applied Maths classroom lecture at Caltech in 1978 that “The purpose of Mathematics is to eliminate thought” (cited in Gustafson, 2017). In that spirit, one could perhaps attempt to resolve all this “troublesome-ness” by somehow assigning weights to the connecting edges between the concepts of a specific topic, and then identifying an absolute “shortest path” through them, thus algorithmically proving the existence and validity of Threshold Concepts. Such a computational device would be ideal if one had (enough) appropriate data to feed it in advance. However, Davies and Mangan (2007) conclude in their research that TCs need to be introduced to students after they are “in a position to use them to integrate prior knowledge”, while, at the same time, Loertscher (2011) suggests that the development of TCs needs to (somehow) happen early in the study of any discipline, so that student understanding of the core concepts can be built on a stronger basis. Commonsensically, a concept that you are unable to define explicitly cannot purposefully transform students.
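To make the thought experiment concrete, here is a minimal sketch of what such a device might look like, using Dijkstra’s algorithm over a toy concept graph. The concepts and ‘difficulty’ weights are entirely invented for illustration; obtaining defensible weights in advance is, of course, exactly the missing ingredient.

```python
# A toy "critical path" finder: concepts as nodes, invented difficulty weights
# on the edges, and Dijkstra's algorithm picking the least troublesome route.
import heapq

concept_graph = {
    "variables": {"loops": 2, "functions": 4},
    "loops":     {"functions": 1, "recursion": 5},
    "functions": {"recursion": 2},
    "recursion": {},
}

def shortest_path(graph, start, goal):
    """Return (total difficulty, path) from start to goal."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

print(shortest_path(concept_graph, "variables", "recursion"))
# -> (5, ['variables', 'loops', 'functions', 'recursion'])
```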
In addition to this antithesis, more dangers lie throughout the TC identification process. It was Maton (2014) who recently postulated that segmentalism, i.e. the act of linking knowledge – e.g. Hodgins’ (arbitrary) learning objects (Polsani, 2003) – too tightly to its context, can prevent students from being able to interconnect concepts or integrate past knowledge with newly acquired knowledge. To address this issue, Maton (op. cit.) defined “semantic gravity” (how closely a concept is tied to its context) and “semantic density” (how much meaning is condensed within a concept) as descriptive factors of how discordant the Intended Learning Outcomes (ILOs) of a programme/course/module/lecture can be with each other. Even if we impose certain modules as prerequisites of others, oftentimes their ILOs overlap, or – even worse – they obfuscate understanding by reusing the same terminology with different meanings in the same context. This can potentially be avoided with a clear and holistic epistemic approach that enables us to semantically define and organise universally (or even institutionally — at first) discrete ILOs, based on the semiotics of their diacritical learning objects.
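As a modest illustration of what such semantic bookkeeping might look like in practice, the sketch below tags each ILO with the concepts it relies on and then flags pairs of ILOs that share concepts across modules. The module names, ILOs and tags are invented purely for illustration.

```python
# Hypothetical ILO overlap detector: if every Intended Learning Outcome is
# tagged with the concepts it depends on, overlaps (and reused terminology)
# across modules become trivial to spot.
from itertools import combinations

ilo_tags = {
    ("Databases", "Explain normalisation"):       {"relation", "dependency", "schema"},
    ("Networks", "Explain layered models"):       {"protocol", "abstraction", "schema"},
    ("Programming", "Use abstraction in design"): {"abstraction", "interface"},
}

for a, b in combinations(ilo_tags, 2):
    shared = ilo_tags[a] & ilo_tags[b]
    if shared:
        print(f"{a} overlaps with {b} on: {sorted(shared)}")
```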
Since Meyer and Land wrote their report on TCs as part of a project for the “Economic and Social Research Council”, I feel it is appropriate to mention a recent publication produced for the same council by a group of LSE researchers, which can be applied here to illustrate one of the most important reasons why we are using such platitudinous methodologies to identify and communicate our ILOs to students: “habit”. As the researchers point out (Larcom et al., 2017), even when people were exposed to new solutions that seemed to work better than their original ways, most of the observed individuals went back to their old methods solely out of habit. When we have been taught in a specific way and it seems to have worked for us, it can be difficult to “break the cycle” and embrace something entirely new; this is why universities have been teaching students in similar ways for hundreds of years already.
A couple of years ago, I tweeted that if you want to explain life to a three-year-old, you can just say it is “a constant race of collecting the right signatures” (Gkoutzis, 2016). As always, I was only half-joking. It is my opinion that a strict application of the TC theory in an educational setting gives us – teachers – its “blessing” to perceive students as if they were “sponges” that need to absorb a specific amount of water before we consider them adept enough and permit them to move on to other pools. In 1880, the French novelist Gustave Flaubert proclaimed in a letter to his – also novelist – friend Léon Hennique, “There is no ‘True’. There are only ways of perceiving.”, only to continue by exclaiming “Down with Schools, whatever they may be! Down with words devoid of sense! Down with Academies, Poetics, Principles! I’m astonished that a man of your worth should still let himself fall into such nonsense!” (cited in Steegmuller, 1982). Perhaps a reconsideration of our ways is in order — with some external feedback this time.
Taking a Business Approach to Teaching
Working with the students to define the structure of their programme, as well as enhancing their abilities (e.g. experienced thinking, idea synthesis, etc.) instead of only their knowledge, should be a core goal for any university these days. There is an Eastern proverb that affirms “The teacher and the taught together create the teaching” (cited in Tolle, 2004). This is why some scholars maintain that universities should focus their purpose and priorities on treating students as “partners” (a business term), whose soft skills need to be improved more than their memorisation, thus – in a way – suggesting that we should add – as Bernstein (1999) puts it – more horizontal (everyday, commonsensical) discourse to the currently heavily vertical (hierarchical) curricula.
This opinion is corroborated by the industry as well. During the recent World Economic Forum (2018b) Annual Meeting at Davos, the founder of the Alibaba Group, Jack Ma, stated that “If we do not change the way we teach, 30 years later we will be in trouble”, and continued by expounding that this is “because the way we teach [and] the things we teach our kids, are the things from the past 200 years — it is knowledge based, and we cannot teach our kids to compete with [a] machine who [sic] is smarter. We have to teach something unique, [so that] a machine can never catch up with us”. Ma was not simply speaking as a technocratic Chinese business tycoon, but also as an experienced former university lecturer in International Trade. When the panel moderator followed up by asking which skills we should be teaching students these days, Ma responded with “value, believing, independent thinking, teamwork [and] care for others”, cultivated by employing “sports” and “art” studies, such as “music” and “painting”, in order to make sure that our (human) students will become “different” and “better” than machines.
The same opinion was also supported by Minouche Shafik, the Director of LSE, who stated (World Economic Forum, 2018a) that “anything that is routine and repetitive will be automated”, which means students should focus on acquiring “the soft skills — the creative skills”, and also that “teaching people how to learn is the most important skill” teachers can bestow upon students; i.e. “the research skills, the ability to find information and synthesise it and make something of it”.
This concept was apparently a common theme during the 2018 meeting of the World Economic Forum (2018c), with representatives from LEGO, Unilever, IKEA and even National Geographic reiterating the importance of children playing face-to-face — what some call “real play”, in a world where technological advances and the growth of the Internet are gradually making knowledge “obsolete” (Hope, 2018). However, before you panic and run back to the drawing board to manically rewrite all your Session Plans and Schemes of Work, we should have a look at what technology can do for – and to – us.
First and foremost, it should be noted that, as Bernstein prophetically predicted (Muller et al., 2004), allowing the market (or the government) to direct the specific concepts (threshold or not) that students should focus on during their subject studies can usher in a new “totally pedagogised society”, reminiscent of the days when the Catholic Church used to dictate the validity of scientific hypotheses. This would eventually lead to an economy/technology-led educational culture, allowing the “socially empty notion of trainability” (ibid.) to define a “dumbed down”, performance-based pedagogy, merely to please the market pro tempore — until the robots catch up with us.
As a Computer Scientist, as well as a technology enthusiast, I find the (industry-sponsored) perception of “artistic creativity as human superiority” particularly amusing. In 1983, David Cope (1989), now an emeritus professor of music at UCSC, started the “Experiments in Musical Intelligence” (EMI) project. His goal was to create a system able to “learn” by “studying” the music of well-known composers, with the aim of producing completely new, yet similar in style, compositions that could fool even the most avid classical music aficionados as to the source of the creation. After training EMI on Bach, he let the system run for one afternoon, swiftly generating as many as 5,000 chorales (Blitstein, 2010). Some of these arrangements were so intricate that pianists were unable to perform them, so Cope had to use a high-tech Disklavier piano to reproduce EMI’s music directly from the computer files. Audiences were unable to tell the difference between an original Bach and a musical piece contrived by EMI, which demonstrates that being creative for “creativity’s sake” will not necessarily make us better than machines. Cope himself has stated that “part of the teacher’s job is to challenge students; to make them angry; to make them think for themselves” (Engadget, 2013), and his computer program had achieved exactly this, by exhibiting how easy – yet pointless – it is for students to learn merely by being taught to follow – or copy – the style of another.
The same, of course, applies to the other skills the Davos speakers mentioned. With the help of Machine Learning (ML), AI can now generate paintings in the style of famous painters (Chun, 2017), or in completely new styles that humans had never thought of attempting before (New Scientist, 2017; Elgammal et al., 2017) — even though ownership of the output is still being debated (Abbott, 2016). The artistic field, which sympathetic businessmen and scholars wishfully perceive as “safe from harm” from machines, has already been invaded by our “mechanical overlords”, with their newly gained ability to appear just as “creative” as we are, while also having the benefit of being astonishingly faster in comparison.
Therefore, hoping to overpower technology simply by insisting on the sublimity of our “human ingenuity” is a fallacy that needs to become apparent to all — especially to teachers who are currently preparing the next generation of students to face this former science fiction scenario as a – now – established reality. Even when we try to take advantage of automated computer tools to speed up our work, for example when trying to find and algorithmically synthesise information for our research (Knight, 2017), we end up creating computer applications that can actually do our job better than we can. One such example (Yuan et al., 2017) uses “Machine Comprehension” (MC) — one step up from ML, where the program aims to achieve a semantic realisation of what it is actually reading, and then formulates questions based on its understanding of the text. The researchers theorise that by having the AI ask appropriately synthesised questions accompanied by correct answers, it shows it has “understood” what it read — a method that has been formalised for (human) students since the early 1980s (Singer and Donlan, 1982). Going down this path, it is only a matter of when – not if – computers will be able to generate original research — as they have already done with art.
Using (and Abusing) Technology in Education
Of course, the idea of technology taking over the tasks of workers who are unable to catch up with it is definitely not new. As far back as 1930, the British economist John Maynard Keynes (1963) was “warning” about the on-going “technological unemployment” that was claiming the jobs of individuals at an increasing pace. Fortunately, Keynes also postulated that there would be a “temporary phase of maladjustment” until we are able to align the workforce with the latest developments via education.
It was only recently that Frey and Osborne (2017) boldly stated that “most management, business, and finance occupations, which are intensive in generalist tasks requiring social intelligence”, as well as “most occupations in education, healthcare, arts and media”, involve a “low risk” of being immediately automated – i.e. taken over by AI – in the near future. Additionally, Brynjolfsson and McAfee (2012) tactfully conclude that “we [humans] and our world will prosper on the digital frontier” because “the economics of digital information are not of scarcity, but of abundance”. In a world where AI Teaching Assistants not only go undetected, but are also preferred over the slower (in comparison) human ones (Georgia Tech University, 2017), one can only laugh at the “pat on our backs” self-delusion that Academics will survive the digital apocalypse.
Before I turn any readers of this essay into technophobic Luddites who will shortly start stockpiling food and water to survive Skynet’s Technological Singularity, allow me to mitigate your fears by saying that we are definitely not there yet. Geoffrey Hinton, a British cognitive psychologist who is considered to be the “godfather” of Artificial Intelligence, recently stated that we should “throw it all away and start again” (Axios, 2017). The supervised Deep Learning techniques we use to train – not teach – AI neural networks mostly utilise Backpropagation — i.e. propagating the error of each attempt backwards through the network in order to adjust its weights towards the wanted result, which means they are basically a glorified “try everything until you succeed as fast as possible” contrivance.
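For readers who have never met backpropagation, the following deliberately tiny sketch (one weight, one made-up target, plain gradient descent) illustrates the ‘adjust until the output matches the wanted result’ loop described above; real networks simply repeat it across millions of weights and many layers.

```python
# A one-weight caricature of supervised training: nudge the weight, guided by
# the derivative of the error, until the output matches the wanted result.
def train(x=2.0, target=10.0, w=0.5, learning_rate=0.05, steps=200):
    for _ in range(steps):
        prediction = w * x              # forward pass: compute the output for this input
        error = prediction - target     # compare it with the wanted result
        gradient = 2 * error * x        # backpropagate: how does the weight affect the error?
        w -= learning_rate * gradient   # nudge the weight and try again
    return w, w * x

print(train())  # -> weight close to 5.0, output close to the target of 10.0
```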
An Education Scientist could perhaps argue that such neural networks are basically attempting to gradually identify their TC nodes, which could potentially reveal the shortest path towards the preset result, via only the absolutely critical intermediate nodes. However, Hinton argues that, if we wish to eventually achieve unsupervised learning, i.e. the AI being able to follow the data => information => knowledge/understanding => wisdom paradigm (Ackoff, 1989; Bellinger et al., 2004), we probably need to replace this mechanism with something else entirely. Computers — those “fast idiots”, as the German philosopher Oswald Schwemmer calls them (cited in Wennemann, 2013) — are unable to become truly innovative and creative because they are trapped in the pre-defined mindset of an engineer’s (i.e. teacher’s) expectations: take this input (course material); this is the output I expect from you as proof that you understood it (exam answers) — now, figure out how to get from here to there! Does this remind you of something? There is a thin line between teaching and training.
In 1966, the architect Cedric Price held a lecture entitled “Technology is the answer, but what was the question?”. This whimsical title was used to attract the attention of the audience to the matter of harmoniously combining technological advancements with modern design. Price was quoted as saying during this lecture that “Technology enables variation that is directly related to the whims or appetites of the user, and I think that technology must be drawn on to allow for people’s appetites changing by the week and not by the year” (cited in Cattani and Ferrante, 2015).
This opinion suggests that we can use technology to focus on the needs of each individual, thus popularising the concepts of the – still prevalent – paradigm of “User Centered Design” (UCD). Unlike his contemporary, Ronald Mace, an architect who championed a “Universal Design” (UD) approach that would somehow fit all users, including people with disabilities, at once (Gkoutzis, 2017), Price advocated a more personalised approach to meet the needs of each user. His designs were based on the philosophy that structures should meet and serve the requirements of their users; if they do not achieve this, then they must either be transformed, or even demolished (Milmo, 2014).
It is important to remember that, as Valiant (2013) points out: “When explicitly programming a machine or teaching a person, one is not starting from a blank slate, but building on features for which the machine or human already has programs”. If you want to write a very specific program for said machine or individual, you must first understand exactly what their system can already do; otherwise, you may not achieve the wanted result — or worse. A personalised learning environment could potentially identify what the user/student already knows, using existing – and ever-growing – interdisciplinary databases of interlinked concepts, containing accumulated wisdom which could then be used to also “teach” machines — along with their human users. Of course, this could potentially turn such a system into yet another tool that originally aimed to bridge the gap between high and low tech, but ended up widening the chasm even further.
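As a rough sketch of this idea, the snippet below models a (wholly invented) prerequisite map of interlinked concepts and, given what a student already knows, returns the concepts they are actually ready to tackle next; a real personalised learning environment would obviously need far richer data and far more careful modelling.

```python
# A toy prerequisite map: given the concepts a student already knows,
# list the ones whose prerequisites are all satisfied.
prerequisites = {
    "sets":        [],
    "functions":   ["sets"],
    "probability": ["sets", "functions"],
    "statistics":  ["probability"],
}

def ready_to_learn(known, prereqs):
    """Concepts not yet known whose prerequisites are all already known."""
    return [concept for concept, needs in prereqs.items()
            if concept not in known and all(n in known for n in needs)]

print(ready_to_learn({"sets", "functions"}, prerequisites))  # -> ['probability']
```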
It has been suggested (Harari, 2017; Lohr, 2015) that data could one day rule the world, creating – in a way – a new form of capitalism which is data-driven, with Big Data and ML/MC AI systems deciding on what is best for us based on their predefined ruleset. But who defines the rules for what a student should learn? Initially, perhaps, monetary needs dictate which path to take, with the student (or the parents) searching for the intersection in a mental Venn diagram, where A = what is popular or well paid, and B = what seems attainable. However, in the end, students should be responsible for setting their own goals, regardless of what their “data fingerprint” – or society – suggests. In this way, students will be “learning for becoming” — not merely for surviving.
So, where does this leave the teacher? While discussing with a colleague the benefits of students attending lectures that are also recorded, I came up with an analogy comparing watching the recording from home to “a movie” and actually coming to class to “the theatre” — definitely a different personal experience. Until machines manage to match the humour and temperament of C-3PO and Marvin (the Paranoid Android), I feel that teachers who truly give themselves to their teaching will still inspire (even the few) students who appreciate the valuable experience of “studentship”.
Thinking Positive
In the epilogue of his book “Overcomplicated”, Arbesman (2016) ends on a positive note, affirming that we must have “humility” and take an “iterative” approach to understanding while interacting with technology, in order to stop hiding behind the statement of philosopher Ludwig Wittgenstein: “what we cannot talk about we must pass over in silence” (from his 1922 book “Tractatus Logico-Philosophicus”).
This thinking can be extended to teaching technology-related curricula as well. We must consider students as peers, whose only difference from us is the years of experience in a specific domain which they are still trying to master. This experience gap does not make us better or superior to them in any way, nor does it apotheosise us to the unique entitlement rank of magically knowing exactly what a student would consider troublesome knowledge, thus somehow allowing us to define the specific TCs that would be appropriate for each of them. As Shinners-Kennedy (2016) finds, “simply canvassing experts and asking them to classify concepts as threshold concepts is an unreliable approach to identification”.
Regardless of the domain a student is currently studying, we must attempt to communicate our knowledge to them in a personalised manner, which they can appreciate and then use as a stepping stone towards finding even higher truths than the ones we originally set out to pursue, enabling us to potentially transcend our preconceptions of any antecedent, limiting thresholds. Do not dismiss, nor restrict, creativity merely as an “artistic” concept — we can create anything we truly set our mind to: our ideas, our goals, our future; ourselves.
Konstantinos Gkoutzis
kgk.gr
References
Abbott, R. (2016). I Think, Therefore I Invent: Creative Computers and the Future of Patent Law. BCL Rev., 57, p. 1079.
Ackoff, R. L. (1989). From Data to Wisdom. Journal of Applied Systems Analysis, 16 (1). pp. 3-9.
Arbesman, S. (2016). Overcomplicated: Technology at the Limits of Comprehension. Current/Penguin, p. 176.
Axios. (2017). Artificial intelligence pioneer says we need to start over. [online] Available at: https://www.axios.com/artificial-intelligence-pioneer-says-we-need-to-start-over-1513305524-f619efbd-9db0-4947-a9b2-7a4c310a28fe.html [Accessed 11 Feb 2018].
Bellinger, G., Castro, D. and Mills, A. (2004). Data, Information, Knowledge, and Wisdom. [online] Available at: http://www.systems-thinking.org/dikw/dikw.htm [Accessed 11 Feb 2018].
Bernstein, B. (1999). Vertical and horizontal discourse: An essay. British journal of sociology of Education, 20(2), pp. 157-173.
Blitstein, R. (2010). The Cyborg Composer. [online] Pacific Standard. Available at: https://psmag.com/social-justice/triumph-of-the-cyborg-composer-8507 [Accessed 11 Feb 2018].
Brynjolfsson, E. and McAfee, A. (2012). Race against the machine: How the digital revolution is accelerating innovation, driving productivity, and irreversibly transforming employment and the economy. Digital Frontier Press, pp. 71-77.
Cattani, E. and Ferrante, A. (2015). i QUADERNI #06. Journal of Urban Design and Planning, p. 50.
Chun, R. (2017). It’s Getting Hard to Tell If a Painting Was Made by a Computer or a Human. [online] Artsy. Available at: https://www.artsy.net/article/artsy-editorial-hard-painting-made-computer-human [Accessed 11 Feb 2018].
Cope, D. (1989). Experiments in musical intelligence (EMI): Non‐linear linguistic‐based composition. Journal of New Music Research, 18(1-2), pp. 117-139.
Davies, P. and Mangan, J. (2007). Threshold concepts and the integration of understanding in economics. Studies in Higher Education, 32(6), pp. 711-726.
Elgammal, A., Liu, B., Elhoseiny, M. and Mazzone, M. (2017). CAN: Creative Adversarial Networks, Generating “Art” by Learning About Styles and Deviating from Style Norms. arXiv preprint arXiv:1706.07068.
Engadget. (2013). Switched on Bach: David Cope’s computer compositions. [online] Available at: https://www.engadget.com/2013/05/28/david-cope/ [Accessed 11 Feb 2018].
Felten, P. (2016). On the threshold with students. In Threshold Concepts in Practice. SensePublishers, pp. 3-9.
Frey, C.B. and Osborne, M.A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, pp. 254-280.
Georgia Tech University. (2017). Jill Watson, Round Three. [online] Available at: http://www.news.gatech.edu/2017/01/09/jill-watson-round-three [Accessed 11 Feb 2018].
Gkoutzis, K. (2016). How to explain modern life to a 3 year old. [Twitter] Available at: https://twitter.com/kgkoutzis/status/741935906446319617 [Accessed 11 Feb 2018].
Gkoutzis, K. (2017). (Mis)Understanding Universal Design. [online] Available at: https://kgk.gr/2017/11/30/ud20/ [Accessed 11 Feb 2018].
Gustafson, J.L. (2017). The End of Error: Unum Computing. CRC Press, p. XIII.
Harari, Y.N. (2017). Homo Deus: A Brief History of Tomorrow. Vintage, pp. 427-439.
Hope, K. (2018). ‘Let children play to learn work skills’. [online] BBC News. Available at: http://www.bbc.co.uk/news/business-42773557 [Accessed 8 Feb 2018].
Illeris, K. (2007). How we learn: Learning and non-learning in school and beyond. Routledge, pp. 148-165.
Keynes, J. M. (1963). Essays in Persuasion. W.W.Norton & Co., pp. 358-373.
Knight, W. (2017). A new AI algorithm summarizes text amazingly well. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/607828/an-algorithm-summarizes-lengthy-text-surprisingly-well/ [Accessed 11 Feb 2018].
Larcom, S., Rauch, F. and Willems, T. (2017). The benefits of forced experimentation: striking evidence from the London underground network. The Quarterly Journal of Economics, 132(4), pp. 2019-2055.
Loertscher, J. (2011). Threshold concepts in biochemistry. Biochemistry and molecular biology education, 39(1), pp.56-57.
Lohr, S. (2015). Data-ism. Oneworld Publications, pp. 207-215.
Maton, K. (2014). Knowledge and knowers: Towards a realist sociology of education. Routledge, pp. 106-145.
Meyer, J. and Land, R. (2003). Threshold concepts and troublesome knowledge: Linkages to ways of thinking and practising within the disciplines. ETL Project Occ. Report 4, p. 1.
Mezirow, J. (1991). Transformative dimensions of adult learning. Jossey-Bass, pp. 118-144.
Milmo, C. (2014). Cedric Price: The most influential architect you’ve never heard of. [online] The Independent. Available at: http://www.independent.co.uk/arts-entertainment/architecture/cedric-price-the-most-influential-architect-youve-never-heard-of-9852200.html [Accessed 11 Feb 2018].
Muller, J., Davies, B. and Morais, A. (2004). Reading Bernstein, Researching Bernstein. London, RoutledgeFalmer, pp. 2-11.
New Scientist. (2017). Artificially intelligent painters invent new styles of art. [online] Available at: https://www.newscientist.com/article/2139184-artificially-intelligent-painters-invent-new-styles-of-art/ [Accessed 11 Feb 2018].
O’Donnell, R. (2010). A critique of the threshold concept hypothesis and an application in economics. University of Technology Sydney, Working Paper No. 164, p. 15.
Perkins, D. (2006). Constructivism and troublesome knowledge. Overcoming barriers to student understanding, pp. 33-47.
Polsani, P.R. (2003). Use and abuse of reusable learning objects. Journal of Digital information, 3(4). [online] Available at: https://journals.tdl.org/jodi/index.php/jodi/article/view/89/88 [Accessed 11 Feb 2018].
Shinners-Kennedy, D. (2016). How NOT to identify threshold concepts. In Threshold Concepts in Practice. SensePublishers, Rotterdam, pp. 253-267.
Singer, H. and Donlan, D. (1982). Active comprehension: Problem-solving schema with question generation for comprehension of complex short stories. Reading Research Quarterly, pp. 166-186.
Steegmuller, F. (1982). The Letters of Gustave Flaubert: 1857-1880. Harvard University Press, p. 266.
Tolle, E. (2004). The power of now: A guide to spiritual enlightenment. New World Library, p. 103.
Valiant, L. (2013). Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World. Basic Books, pp. 164-165.
Van Gennep, A. (1960). The rites of passage. Routledge, p. 11.
Wennemann, D.J. (2013). Posthuman personhood. University Press of America, p. 2.
World Economic Forum. (2018a). Saving Economic Globalization from Itself. [Online Video]. 23 January 2018. Available at: https://www.youtube.com/watch?v=C0ytDPHC750 [Accessed 11 Feb 2018], 00:34:40.
World Economic Forum. (2018b). Meet the Leader with Jack Ma. [Online Video]. 24 January 2018. Available at: https://www.youtube.com/watch?v=4zzVjonyHcQ [Accessed 11 Feb 2018], 00:43:37.
World Economic Forum. (2018c). To play is to learn. Time to step back and let kids be kids. [online] Available at: https://www.weforum.org/agenda/2018/01/to-play-is-to-learn/ [Accessed 11 Feb 2018].
Yuan, X., Wang, T., Gulcehre, C., Sordoni, A., Bachman, P., Subramanian, S., Zhang, S. and Trischler, A. (2017). Machine Comprehension by Text-to-Text Neural Question Generation. arXiv preprint arXiv:1705.02012.