
1. Introduction

My students are complaining, still. They have given up trying to wheedle their way out of translation memories (TM); most have at last found that all the messing around with incompatibilities is indeed worth the candle: all my students have to translate with a TM all the time, and I don’t care which one they use. Now they are complaining about something else: machine translation (MT), which is generally being integrated into translation memory suites as an added source of proposed matches, is giving us various forms of TM/MT. These range from the standard translation-memory tools that integrate machine-translation feeds, through to machine translation programs that integrate a translation memory tool. When all the blank target-text segments are automatically filled with suggested matches from memories or machines, that’s when a few voices are raised:

“I’m here to translate,” some say, “I’m not a posteditor!”
“Ah!” I glibly retort. “Then turn off the automatic-fill option…”

Which they can indeed do. And then often decide not to, out of curiosity to see what the machine can offer, if nothing else.

The answer is glib because, I would argue, statistics-based MT, along with its many hybrids, is destined to turn most translators into posteditors one day, perhaps soon. And as that happens, as it is happening now, we will have to rethink, yet again, the basic configuration of our training programs. That is, we will have to revise our models of what some call translation competence.[1]

2. Reasons for the revolution

MT systems are getting better because they are making use of statistical matches, in addition to linguistic algorithms developed by traditional MT methods. Without going into the technical details, the most important features of the resulting systems are the following:

  1. The more you use them (well), the better they get. This would be the “learning” dimension of TM/MT.

  2. The more they are online (“in the cloud” or on databases external to the user), the more they become accessible to a wide range of public users, and the more they will be used.

These two features are clearly related in that the greater the accessibility, the greater the potential use, and the greater the likelihood the system will perform well. In short, these features should create a virtuous circle. This could constitute something like a revolution, not just in the translation technologies themselves but also in the social use and function of translation. Recent research indicates that, for Chinese-English translation and other language pairs,[2] statistical MT is now at a level where beginners and Masters-level students with minimal technological training can use it to attain productivity and quality comparable to those of fully human translation, and any gains should then increase with repeated use (Pym 2009; García 2010; Lee and Liao 2011). In more professional situations, the productivity gains resulting from TM/MT are relatively easy to demonstrate.[3]

Of course, as in all good revolutions, the logic is not quite as automatic as expected. When free MT becomes ubiquitous, as could be the case with Google Translate, uninformed users publish unedited electronic translations with it, thus recycling errors that are fed back into the very databases on which the statistics operate. That is, the potentially virtuous circle becomes a vicious one, and the whole show comes tumbling down. One solution to this is to restrict the applications to which an MT feed is available (as Google did with Google Translate in December 2011, making its Application Programming Interface a paid service, and as most companies should do, by developing their own in-house MT systems and databases). A more general solution could be to provide short-term training in how to use MT, which should be of use to everyone. Either way, the circles should all eventually be virtuous.

Even superficial pursuit of this logic should reach the point that most irritates my students: postediting, the correction of erroneous electronic translations, is something that “almost anyone” can do, it seems. When you do it, you often have no need to look constantly at the foreign language; for some low-quality purposes, you may have no need to know any foreign language at all, if and when you know the subject matter very well. All you have to do is say what the translation seems to be trying to say. So you are no longer translating, and you are no longer a translator. Your activity has become something else.

But what, exactly, does it become? Is this really the end of the line for translators?

3. Models of translation competence

Most of the currently dominant models of “translation competence” are multi-componential. That is, they bring together various areas in which a good translator is supposed to have skills and knowledge (know how and know that), as well as certain personal qualities, which remain poorly categorized. An important example is the model developed for the European Masters in Translation (EMT) (Figure 1), where it is argued that the translation service provider (since this mostly concerns market-oriented technical translation) needs competence in business (“translation service provision competence”), languages (“language competence”), subject matter (“thematic competence”), text linguistics and sociolinguistics (“intercultural competence”), documentation (“information mining competence”), and technologies (“technological competence”).

Figure 1

The EMT model of translation competence
(EMT Expert Group 2009: 7)

There is nothing particularly wrong with such models. In fact, they can be neither right nor wrong, since they are simply lists of training objectives, with no particular criteria for success or failure. How could we really say that a particular component is unneeded, or that one is missing? How could we actually test to see whether each component is really distinct from all the others? How could we prove that one of these components is not actually two or three stuck together with watery glue? Could we really object that this particular model has left out something as basic and important as translating skills, understood as the set of skills that actually enable a person to produce a translation, i.e., what some other models term “transfer skills” (see for example Neubert 2000)? There is no empirical basis for these particular components, at least beyond teaching experience and consensus. At best, the model represents coherent thought about a particular historical avatar of this thing called translation.[4] The EMT configuration is nevertheless important precisely because it is the result of significant consensus, agreed to by a set of European experts and now providing the ideological backbone for some 54 university-level training programs in Europe, for better or worse.

So what does the EMT model say about machine translation? MT is indeed there, listed under “technology,” and here is what they say: “Knowing the possibilities and limits of MT” (EMT Expert Group 2009: 7). It is thus knowledge (know that), not a skill (know how), apparently – you should know that the thing is there, but don’t think about doing anything with it.

Admittedly, that was in 2009, an age ago, and no one in the EMT panel of experts was particularly committed to technology (Gouadec, perhaps the closest, remains famous for pronouncing, in a training seminar, that “all translation memories are rotten”). As I predicted some years ago (finding inspiration in Wilss), the multi-componential models are forever condemned to lag behind both technology and the market (Pym 2003).

What happens to this model if we now take TM/MT seriously? What happens if we have our students constantly use tools that integrate statistical MT feeds? Several things might upset multi-componential competence:

  • For a start, “information mining” (EMT Expert Group 2009) is no longer a visibly separate set of skills: much of the information is there, in the TM, the MT, the established glossary, or the online dictionary feed. Of course, you may have to go off into parallel texts and the like to consult the fine points. But there, the fundamental problems are really little different from those of using TM/MT feeds: you have to know what to trust. And that issue of trust would perhaps be material for some kind of macro-skill, rather than separate technological components.

  • The languages component must surely suffer significant asymmetry when TM/MT is providing everything in the target language. It no doubt helps to consult the foreign language in cases of doubt, but it is now by no means necessary to do this as a constant and obligatory activity (we need some research on this). Someone with strong target-language skills, strong area knowledge, and weak source-language skills can still do a useful piece of postediting, and they can indeed use TM/MT to learn about languages.[5]

  • Area knowledge (“thematic competence” [EMT Expert Group 2009]) should be affected by this same logic. Since TM/MT reduces the need for language skills, or can make the need highly asymmetrical, much basic postediting can theoretically be done by area experts who have quite limited foreign-language competence.[6] This means that the language expert, the person we are still calling a translator, could come in and clean up the postediting done by the area expert. That person, the translator, no longer needs to know everything about everything. What they need are great target-language skills and highly developed teamwork skills.

  • The one remaining area is “intercultural competence” (EMT Expert Group 2009), which in the EMT model turns out to be a disguise for text linguistics and sociolinguistics (and might thus easily have been placed under “language competence”). Yes, indeed, anyone working with TM/MT will need tons of these suprasentential text-producing skills, probably to an extent even greater than is the case in fully human translation.

So much for a traditional model of competence. The basic point is that technology is no longer just another add-on component. The active and intelligent use of TM/MT should eventually bring significant changes to the nature and balance of all other components, and thus to the professional profile of the person we are still calling a translator.

4. Reconfiguring the basic terms of translation

Of course, you might insist that the technical posteditor is no longer a translator – the professional profile might now be one variant of the technical communicator, a range of activities that is indeed seeking a professional space. Such a renaming of our profession would effectively protect the traditional models of competence, bringing comfort to a generation of translator-trainers, even if it risks reducing the employability of graduates. Yet careful thought is required before we throw away the term translator altogether, or restrict it to old technologies: our modes of institutional professionalization may be faulty, but they are still more institutionally sound, at least in Europe and Canada, than are those of the technical communicator.

Is it the end of the line for translators? Not at all – some of our skills are quite probably in demand more than ever. The question, as phrased, is primarily one of nomenclature, of whether we still need to be called translators. If we do want to retain our traditional name but move with the technology, then a good deal of thought has to be given to the cognitive, professional, and social spaces thus created.

For example, translation theory since the European Renaissance has been based on the binary opposition of source text versus target text (with many different names for the two positions). For as long as translation theory – and research – was based on comparing those two texts, the terms were valid enough. Now, however, we are faced with situations in which the translator is working from a database of some kind (a translation memory, a glossary or at least a set of bitexts), often sent by the client or produced on the basis of the client’s previous projects. In such cases, there is no one text that could fairly be labeled the source (an illusion of origin that should have been dispelled by theories of intertextuality anyway); there are often several competing points of departure: the text, the translation memory, the glossary, and the MT feed, all with varying degrees of authority and trustworthiness. Sorting through those multiple sources is one of the new things that translators have to do, and that we should be able to help them with. For the moment, though, let us simply recognize that the space of translation no longer has two clear sides: the game is no longer played between source and target texts, but between a foreign-language text, a range of databases, and a translation to be used by someone in the future (a point well made in Yamada 2012).[7]

In recognition of this, I propose that the thing that English has long been calling the source text should no longer be called a source. It is a start text (we can still use the initials ST) – an initial point of departure for a workflow, and one among several criteria of quantity for a process that may lead through many other inputs.[8] As for target text, there was never any overriding reason for not simply calling it a translation, or a translated text (TT), if you must, since the actual target concept moved, long ago, downstream to the space of text use.

5. Reconfiguring the social space of translation

An even more substantial reconfiguration of this space involves situations where language specialists (translators or other technical communication experts) work together with area specialists (experts in the particular field of knowledge concerned). This basic form of cooperation was theorized long ago (most coherently in Holz-Mänttäri 1984); it now assumes new dimensions thanks to technologies.

Figure 2 shows a possible workflow that integrates professional translators and non-translator experts (shoddily named the crowd, although they might also be in-house scientists, Greenpeace activists, or long-time users of Facebook). Follow the diagram from top-left: texts are segmented for use in translation memories (TM); the segments are then fed through a machine translation system (MT); the output is postedited by non-translators (crowd translation); the result is then checked by professionals, reviewed for style, corrected, and recombined with all the layout features and graphical material that might have been removed at the initial segmentation stage, resulting in the final localized content. The important point is that the machine translation output is postedited by non-translators but is then revised by professional translators and edited by professional editors.
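To make the ordering of these stages concrete, here is a minimal sketch of such a pipeline in Python. Every function in it is a toy placeholder for the stage it names (TM lookup, MT, crowd postediting, professional revision); it describes no actual system or API, only the sequence discussed above.

    # Toy sketch of the Figure 2 workflow; every function is a named placeholder.

    def segment_for_tm(document: str) -> list[str]:
        # Strip layout (omitted here) and segment crudely, one chunk per sentence.
        return [s.strip() for s in document.split(".") if s.strip()]

    def tm_lookup(segment: str, memory: dict[str, str]):
        return memory.get(segment)  # exact matches only, in this toy version

    def mt_translate(segment: str) -> str:
        return f"[MT draft of: {segment}]"  # placeholder for a statistical MT feed

    def crowd_postedit(draft: str) -> str:
        # Placeholder for postediting by area experts, activists, volunteers...
        return draft.replace("[MT draft of: ", "").rstrip("]")

    def professional_revise_and_edit(segments: list[str]) -> list[str]:
        return segments  # placeholder for professional revision and style editing

    def localize(document: str, memory: dict[str, str]) -> str:
        segments = segment_for_tm(document)
        drafts = [tm_lookup(s, memory) or mt_translate(s) for s in segments]
        postedited = [crowd_postedit(d) for d in drafts]
        revised = professional_revise_and_edit(postedited)
        return ". ".join(revised) + "."  # reinsertion of layout/graphics omitted

    print(localize("The cat sat. The dog barked.", {"The cat sat": "Le chat s'assit"}))

The point of the sketch is simply the ordering: memory before machine, non-translator postediting before professional revision and editing.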

There are many possible variations on this model, most of which possibly concern the growing areas of voluntary participation rather than purely commercial applications. Yet if the model holds to any degree at all, I suggest, translators will need skill combinations that are a little different from those contemplated in the traditional models of competence.

Figure 2

Possible localization workflow integrating volunteer translators (crowd translation)
(Carson-Berndsen, Somers, et al. 2010: 60)[9]

6. New skills for a new model?

I have suggested elsewhere that we should not be spending a lot of time modeling a multi-componential competence (Pym 2003). It is quite enough to identify the cognitive process of translating as a particular kind of expertise, and to make that the centerpiece of whatever we are trying to do, be it in professional practice or the training of professionals. If we limit ourselves to that frame, the impact of TM/MT is relatively easy to define (see Pym 2011b): whereas much of the translator’s skill-set and effort was previously invested in identifying possible solutions to translation problems (i.e., the generative side of the cognitive process), the vast majority of those skills and efforts are now invested in selecting between available solutions, and then adapting the selected solution to target-side purposes (i.e., the selective side of the cognitive process). The emphasis has shifted from generation to selection. That is a very simple and quite profound shift, and it has been occurring progressively with the impact of the Internet.

At the same time, however, some of us are still called on to devise training programs and fill those programs with lists of things-to-learn. That is the legitimizing institutional function that models of competence have been called upon to fulfill. The problem, then, is to devise some kind of consensual and empirical way of fleshing out the basic shift, and of justifying the things put in the model.

The traditional method seems to have been abstract expert reflection on what should be necessary. You became a professor, so you know about the skills, knowledge and virtues that got you there, and you try to reproduce them. Or your institution is teaching a range of things in its programs, you think you have been successful, so you arrange those things into a model of competence. An alternative method, explored in recent research by Anne Lafeber (2012) with respect to the recruitment of translators for international institutions, is to see what goes wrong in current training practices, and to work back from there. Lafeber thus conducted a survey of the specialists who revise translations by new recruits; she asked the specialists what they spend most time correcting, and which of the mistakes by new recruits were of most importance. The result is a detailed weighted list of forty specific skills and types of knowledge, not of some ideal abstract translator but of the things that are not being done well, or are not being done enough, by current training programs. From that list of shortcomings, one should be able to sort out what has to be done in a particular training program, or what is better left for in-house training within employer institutions. In effect, this constitutes an empirical methodology for measuring negative competence (i.e., the things that are missing, rather than what is there), and thus devising new models of what has to be learned.[10]

It should not be difficult to apply something like this negative approach to the specific skills associated with TM/MT. Anyone who has trained students in the use of any TM/MT tool will have a fair idea of what kinds of difficulties arise, as will the students involved. That is an initial kind of practical empiricism – a place from which one can start to list the possible things-to-teach. However, there is also a small but growing body of controlled empirical research on various aspects of TM/MT, including some projects that specifically compare TM/MT translation with fully human translation. Those studies, most of them admittedly based on the evaluation of products rather than cognitive processes, also give a few strong pointers about the kinds of problems that have to be solved.[11] From experience and from research, one might derive the things to watch out for, bearing in mind that those things then have to be tested in some way, to see if they are actually missing when graduates leave to enter the workplace targeted by any particular training program.

Here, then, is a suggested initial list of the skills that might be missing or faulty; it is thus a proposal for things that might have to be learned somewhere along the line.

6.1. Learn to learn

This is a very basic message that comes from general experience, current educational philosophies of life-long learning, and the recent history of technology: whatever tool you learn to use this year will be different, or out-of-date, within two years or sooner. So students should not learn just one tool step-by-step. They have to be left to their own devices, as much as possible, so they can experiment and become adept at picking up a new tool very quickly, relying on intuition, peer support, online help groups, online tutorials, instruction manuals, and occasionally a human instructor to hold their hand when they enter panic mode (the resources are to be used probably more or less in that order). Specific aspects of this learning to learn might include (where S stands for skill):

S.1.1. Ability to reduce learning curves (i.e., learn fast) by locating and processing online resources;

S.1.2. Ability to evaluate the suitability of a tool in relation to technical needs and price;

S.1.3. Ability to work with peers on the solution of learning problems;

S.1.4. Ability to evaluate critically the work process with the tool.

The last two points have important implications for what happens in the actual classroom or workspace, as we shall see below.

6.2. Learn to trust and mistrust data

Many of the experiments that compare TM/MT with fully human translation pick up a series of problems related to the ways translators evaluate the matches proposed to them. This involves not seeing errors in the proposed matches (Bowker 2005; Ribas 2007), working on fuzzy matches when it would be better to translate from scratch (a possible extrapolation from O’Brien 2008; Guerberof 2009; Yamada 2012), or not sufficiently trusting authoritative memories (Yamada 2012). There is also a tendency to rely on what is given in the TM/MT database rather than searching external sources (Alves and Campos 2009). We might describe all these cases as situations involving the distribution of trust and mistrust in data, and thus as a special kind of risk management. This general ability derives from experience with interpersonal relations in different cultural situations, more than from any strictly technical expertise (see Pym 2012). Teixeira (2011) picks up some of this risk management when he finds, in a pilot experiment, that translators who know the provenance of proposed matches spend less time on them than translators who do not. That is, translators do assess the trustworthiness of proposed matches, and they seem to need to do so. The specific skills would be:

S.2.1. Ability to check details of proposed matches in accordance with knowledge of provenance and/or the corresponding rates of pay (“discounts”). That is, if you are paid to check 100% matches, then you should do so; and if not, then not;

S.2.2. Ability to focus cognitive load on cost-beneficial matches (see the sketch after this list). That is, if a proposed translation solution requires too many changes (probably a 70% match or below)[12], then it should be abandoned quickly; if a proposed match requires just a few changes, then only those changes should be made;[13] and if a 100% match is obligatory and you are not paid to check it, then it should not be thought about;[14]

S.2.3. Ability to check data in accordance with the translation instructions: if you are instructed to follow a TM database exactly, then you should do so (Yamada 2012);[15] if you are required to check references with external sources, then you should do that. And if in doubt, you should try to remove the doubt (i.e., transfer risk by seeking clarifications from the client, which is a skill not specific to TM/MT).
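The cost-benefit logic of S.2.2 can be made concrete in a small triage sketch. The threshold is only the rough figure mentioned above (about 70% as the point below which a match stops being worth repairing), and the function is hypothetical rather than a feature of any existing tool:

    def triage_match(match_score: int, paid_to_check_full_matches: bool) -> str:
        # Toy triage of a proposed TM/MT match, following the logic of S.2.2:
        # spend effort only where payment and match quality make it worthwhile.
        if match_score == 100 and not paid_to_check_full_matches:
            return "accept as is; do not spend unpaid effort on it"
        if match_score <= 70:
            return "abandon the match and translate from scratch"
        return "postedit, making only the changes that are needed"

    # Example: a 75% fuzzy match, with unpaid 100% matches excluded from checking.
    print(triage_match(75, paid_to_check_full_matches=False))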

Note that the first two of these skills concern how much translators are paid when using TM/MT. Our focus here is clearly on the technical prowess of adjusting cognitive effort in terms of the prevailing financial rewards. There is nevertheless another side of the coin: considerable political acumen is increasingly required to negotiate and renegotiate adequate rates of pay (with considerable variation for different clients, countries, language directions, and qualities of memories). That side, however, tends to concern changes in the profession as such. It should be discussed in class; negotiations can usefully be simulated; and much can be done to arouse critical awareness of how the rewards of productivity are assessed and distributed. That is, the individual translator should be prepared to do what they can to make work conditions fit performance. Yet the more basic survival skill, in today’s environment, must be to adjust performance to fit work conditions.

6.3. Learn to revise translations as texts

Some researchers report effects that are due not to the use of databases but to the specific type of segmentation imposed by many tools. Indeed, the databases and the segmentation are two quite separate things, at least insofar as they concern cognitive work. Dragsted (2004) points out that sentence-based segmentation can be very different from the segmentation patterns of fully human translation, and the difference may be the cause of some specific kinds of errors; Lee and Liao (2011) find an over-use of pronouns in English-Chinese translation (i.e., interference in the form of excessive cohesion markers); Vilanova (2004) reports a specific propensity for punctuation errors and deficient text cohesion devices; Martín-Mor (2011) concurs with this and finds that the use of a translation memory tends to increase linguistic interference in the case of novices, but not so much in the case of professionals (although in-house professionals did have a tendency to literalism). At the same time, he reports cases where TM segmentation heightens awareness of certain microtextual problems, improving the performance of translators with respect to those problems. As for the effects of translation memories, Bédard (2000) pointed out the effect of having a text in which different segments are effectively translated by different translators, resulting in a “sentence salad.” This is presumably something that can be addressed by post-draft revision. At the same time, Dragsted (2004) and others (including Pym 2009; Yamada 2012) find that translators using TM/MT tend to revise each segment as they go along, allowing little time for a final revision of the whole text at the end. This may be a case where current professional practice (revise as you go along) could differ from the skills that should ideally be taught (revise at the end, and have someone else do the same as well). The difference perhaps lies in the degree of quality required, and that estimation should in turn become part of what has to be learned here.

All these reports concern problems for which the solution should be, I propose, heightened attention to the revision process, both self-revision and other-revision (sometimes called “review” in its monolingual variant). The specific skills would be:

S.3.1. Ability to detect and correct suprasentential errors, particularly those concerning punctuation and cohesion;

S.3.2. Ability to conduct substantial stylistic revising in a post-draft phase (and hopefully to get paid for it!);

S.3.3. Ability to revise and review in teams, alongside fellow professionals and area experts, in accordance with the level of quality required.

Note that all these items, under all three heads, concern skills (knowing how) rather than knowledge (knowing that). This might be considered a consequence of the fast rate of change in this field, where all knowledge is provisional anyway – which should in turn call into question the pedagogical boundary between skills and knowledge (since knowing how to find knowledge becomes more important than internalizing the knowledge itself).

One might also note that the general tenor of these skills is rather traditional. There is a kind of back-to-basics message implied in the insistence on punctuation, cohesive devices, revision, and the following of instructions (in S.2.1 and S.2.3). While foreign-language competence may become less important, rather exacting skills in the target language become all the more important. Indeed, attentiveness to target-language detail might be the one over-arching attitudinal component to be added to this list of skills. Issues of cultural difference, rethinking purpose, and effect on target reader are decidedly less important here than they have become in some approaches to translation pedagogy.

Research using the negative skills approach could now take something like this initial list (under all three heads) and check it against the failings of recent graduates, as assessed by their revisers or employers in the market segment targeted by a specific program. This may involve deleting some items and adding new ones; it will quite possibly involve serious attention to over-correction, to the desire of novice revisers to impose their personal language preferences on the whole world (as noted in Mossop 2001). Simple empiricism will hopefully produce a weighted list, telling us which skills we should emphasize in each specific training program.

7. For a pedagogy of TM/MT

In an ideal world, fully completed empirical research would tell us what we need to teach, and then we would start teaching. In the real world, we have to teach right now, surrounded by technologies and pieces of knowledge that are all in flux. In this state of relative urgency and hence creativity, there has actually been quite a lot of reflection on the ways MT and postediting can be introduced into teaching practices.[16] O’Brien (2002), in particular, has proposed quite detailed contents for a specific course in MT and postediting, which would include the history of MT, basic programming, terminology management, and controlled language (see Kenny and Way 2001). In compiling the above list, however, I have not assumed the existence of a specific course in MT; I have thought more of the minimal skills required for the effective use of TM/MT technology across a whole program; I have left controlled writing for another course (but each institution should be able to decide such things for itself).

The initial list of skills thus suggests some pointers for the way TM/MT could be taught in a transversal mode, not just in a special course on technologies. I am not proposing a list of simple add-ons, things that should be taught in addition to what we are doing now. On the contrary, we should be envisaging a general pedagogy, the main traits of which must start from the reasons why a specific course on TM/MT may not be required.

7.1. Use of the technologies wherever possible

Since we are dealing with skills rather than knowledge, the development of expertise requires repeated practice. For this reason alone, TM/MT should ideally be used in as much of the student’s translation work as possible, not only in a special course on translation technologies. This is not just because TM/MT can actually provide additional language-learning (see Lee and Liao 2011), nor do I base my argument solely on the supposition that any particular type of TM/MT will necessarily configure the students’ future employment (see Yuste Rodrigo 2001). General usage is also advisable in view of the way the technologies can diffusely affect all other skill sets (see my comments above on the EMT competence model). In many cases, of course, any general usage will be hard to achieve, mostly because some instructors either do not know about TM/MT or see it as distracting from their primary task of teaching fully human translation first (which does indeed have some pedagogical virtue – you have to start somewhere). Our markets and tools are not yet at the stage where fully human translation can be abandoned entirely, and TM/MT should obviously not get in the way of classes that require other tools (many specific translation skills can indeed still be taught with pen and paper, blackboard and chalk, speaking and listening). That said, at the appropriate stage of development, students should be encouraged to use their preferred technologies as much as possible and in as many different courses as possible. This means:

  1. making sure they actually have the technologies on their laptops;

  2. teaching in an environment where they are using their own laptops online;

  3. using technologies that are either free or very cheap, of which there are several very good ones (there is no reason why students should be paying the prices demanded by the market leader).

7.2. Appropriate teaching spaces

From the above, it follows that no one really needs or should want a computer lab, especially of the kind where desks are arranged in such a way that teamwork is difficult and the instructor cannot really see what is happening on students’ screens. The exchanges required are more effectively done around a large table, where the teacher can move from student to student, seeing what is happening on each screen (see Figure 3; see also Pym 2006).

Figure 3

A class on translation technology
(Ignacio García teaching in Tarragona)

7.3. Work with peers

The worst thing that can happen with any technology is that a student gets stuck or otherwise feels lost, then starts clicking on everything until they freeze up and sit there in silence, feeling stupid. Get students to work in pairs. Two people talking stand a better chance of finding a solution, and a much better chance of not remaining silent – they are more likely to show they need help from an instructor.

7.4. Self-analysis of translation processes

Once relative proficiency has been gained in the use of a tool, students should be able to record their on-screen translation processes (there are several free tools for doing this), then play back their performance at an enhanced speed, and actually see what effects the tool is having on their translation performance. This should also be done in pairs, with each student tracking the other’s processes, calculating time-on-task and estimating efficiencies. Students themselves can thus do basic process research, broadly mapping their progress in terms of productivity and quality (see Pym 2009 for some simple models of this). The time lag between research and teaching is thus effectively annulled – they become the one activity, under the general head of action.
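The arithmetic involved in such self-analysis is simple enough to sketch. The session log below is invented purely for illustration (words per segment, seconds spent drafting, seconds spent self-revising); in practice the figures would come from the screen recording or from a logging tool:

    # Invented example data: (words in segment, seconds drafting, seconds self-revising).
    session_log = [
        (18, 95, 30),
        (25, 140, 45),
        (12, 60, 20),
    ]

    words = sum(w for w, _, _ in session_log)
    drafting = sum(d for _, d, _ in session_log)
    revising = sum(r for _, _, r in session_log)
    total_hours = (drafting + revising) / 3600

    print(f"Productivity: {words / total_hours:.0f} words per hour")
    print(f"Share of time spent self-revising: {revising / (drafting + revising):.0%}")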

This kind of self-analysis becomes particularly important in the business environments – mentioned above – where translators will have to negotiate and renegotiate their pay rates in terms of productivity. Simulation of such negotiations can itself be a valuable pedagogical activity (see Hui 2012). Only if our graduates are themselves able to gauge the extent and value of their cognitive effort will they then be in a position to defend themselves in the marketplace.

7.5. Collaborative work with area experts

The final point to be mentioned here is the possibility of having translation students work alongside area experts who have not been trained as translators, on the assumption that the basic TM/MT technologies should be of use to all. Some inspiration might be sought in a project that had translation students team up with law students (Way 2003), exploring the extent to which the different competences can be of help to each other. This particular kind of teamwork is well suited to technologies designed for non-professional translators (such as Google Translator Toolkit or Lingotek), and can more or less imitate the kind of cooperation envisaged in Figure 2.

In sum, the pedagogy we seek is firmly within the tradition of constructivist pedagogy, and incorporates transversal skills (learning-to-learn, teamwork, negotiating with clients, etc.) that should be desirable with or without technology. Some of the technological skills might be new, or might reach new extensions, but the teaching dynamics need not be. The above list of ten skills, in three categories, is scarcely revolutionary in itself: it is presented here as no more than a possible starting point for creative experimentation within existing frames.