0 votes

As I am gradually getting more involved in language and dialect adaptations, I keep thinking about what might be the best way to do this. Earlier this year I posted a question regarding the use of the PT Interlinearizer feature vs. Adapt It. But now I’m wondering if there has ever been any discussion about using the Interlinearizer vs. doing a Transliteration project (using an Encoding Converter and the TECkit Mapping Editor). Did anyone ever compare the two methods for adaptation work? And if so, could you point me to a link where I might find a discussion of this topic, with pros and cons, etc.?

Paratext by (252 points)

4 Answers

0 votes
Best answer

Isn’t it a wonderful thing to have choices these days about which tools to use for adaptations! And yet at the same time, we end up staring at the supermarket shelf full of choices, wondering which brand of cornflakes will taste the best, or cost the least, or be best for our health, etc. So, similarly, with choosing the right tool to do adaptation, we need to ask a whole bunch of questions:

  1. Who is going to be doing the actual work (and what level of skill do they have)? This includes both computer skills [which are easy to build] and translation principles skills [which take a lot more time to build].
  2. Does the user understand both the source and target languages adequately? And will they be required to make on-the-spot decisions based on minimal understanding of the text? If so, are they capable of doing so? Will there be a second round of checking to ensure that the right choices were made?
  3. Are there going to be multiple people working on the adaptation, or will it all be handled by one person?
  4. Do you prefer a “slow, manually walking through the entire text” approach, or a “one-off rapid global change” approach (which can be refined and re-run as many times as you want)? Or perhaps a hybrid, with global changes followed by manual tweaks?
  5. How much freedom do you really want the user to have in terms of adapting the text? How will you know that they have been consistent and thorough when revising the text with find-and-replace operations (some of which may have been more comprehensive than others)? And how will you know when they have gone too far and changed the meaning so much that it will require another round of comprehension testing and consultant-checking?
  6. How closely are the dialects and cultures related? Will the same idioms still work in the target?
  7. How predictable are the changes likely to be? Will they primarily be phonological/orthographic (and mostly predictable), or are they mainly lexical (and unpredictable)?
  8. Are the languages closely related grammatically? Do they use the same/similar affixes? Is word order most likely to remain unchanged?
  9. Is the source text complete, final and authoritative, or is it still being worked on? And are you expecting the adaptation process to feed improvements back into the original text (when room for improvement is discovered)?
  10. Is a script conversion (transliteration) also needed as part of the adaptation?
  11. Is this a one-off job, or will there be other materials which will need to follow the same path later (in which case it may be worth a little extra effort to build a bridge)?
  12. How many target dialects are you planning to adapt into? Could you piggy-back off the work done in one to speed up the next?

And the list of questions could go on and on… It would be an interesting exercise to put these questions in a matrix with the various tools in columns and score the appropriateness of each method.
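
To make that matrix exercise concrete, here is a toy Python sketch of how it might look; the tools, criteria, and especially the scores below are invented placeholders, not real rankings of these products:

```python
# Toy decision matrix: tools in columns, criteria in rows.
# All scores (0-5) are made-up placeholders for illustration only.
TOOLS = ["Interlinearizer", "Transliteration", "Adapt It", "FLExTrans"]
CRITERIA = {
    "low computer-skill demand":            [4, 2, 4, 1],
    "handles unpredictable lexical change": [4, 1, 4, 3],
    "fast one-off global changes":          [1, 5, 2, 4],
}

for criterion, scores in CRITERIA.items():
    best = TOOLS[scores.index(max(scores))]
    print(f"{criterion}: best fit = {best}")

# Sum each tool's column for a crude overall ranking
totals = {tool: sum(row[i] for row in CRITERIA.values())
          for i, tool in enumerate(TOOLS)}
print("overall:", max(totals, key=totals.get))
```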

I think that there are a number of situations where a clever set-up using the Transliteration method (as a one-off process) will get you 95% of the way, and then final editing or manual tweaking can polish off the job nicely. And so it is possible that you would do both (i.e. Source-A > automatically modified Target-Bv1 > manually tweaked/Interlinearized Target-Bv2).

Remember that with the Transliteration system you can use simple CC tables, TECkit maps, Python scripts, transliterators, and all kinds of powerful tools to do the initial transformation. You can also have a “main” converter followed by a “fall-back” converter which only acts on data that was unchanged by the main converter. You can daisy-chain a bunch of different converters together, and so on.
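
For instance, here is a minimal Python sketch of that “main converter plus fall-back” idea; this is not the actual SIL Converters API, and the sound-change rules are invented purely for illustration:

```python
import re

# Invented example rules for a hypothetical dialect pair
MAIN_RULES = [(r"th", "s"), (r"aa", "o")]   # predictable sound changes
FALLBACK_RULES = [(r"q", "k")]              # used only when the main rules did nothing

def apply_rules(word, rules):
    for pattern, replacement in rules:
        word = re.sub(pattern, replacement, word)
    return word

def convert(word):
    converted = apply_rules(word, MAIN_RULES)
    if converted == word:                   # main converter left the word unchanged...
        converted = apply_rules(word, FALLBACK_RULES)   # ...so the fall-back gets a turn
    return converted

def pipeline(text, stages):
    # Daisy-chaining: the output of each converter feeds the next one
    for stage in stages:
        text = " ".join(stage(word) for word in text.split())
    return text

print(pipeline("thaaqa qa", [convert]))     # -> "soqa ka"
```

The point is just the control flow: fall-back rules fire only where the main pass made no change, and chained converters feed each one’s output into the next.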

If using the Interlinearizer in Paratext, you can set it up so that the source and model text both point to the same text, which will auto-fill all the target words. That way you will only have to check/tweak a few words in each verse before approving and moving on. It becomes quite efficient (especially if all the predictable changes have already been dealt with through a previous automatic process), but it still takes time to step through.

I hope these ideas are helpful as you stare at the shelf containing a dozen varieties of cornflakes!

by (2.6k points)
0 votes

Interesting idea. You could in theory use a transliteration map to make consistent changes to one text to produce another. You could have some rules that look at context, but it could get very complicated.
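
As a rough illustration of a context-sensitive rule, here is a hypothetical Python sketch; the sound change is invented, and a real transliteration map would express this in its own rule syntax rather than in code:

```python
import re

def intervocalic_t_to_d(text):
    # Change "t" to "d" only when a vowel stands on both sides;
    # the lookbehind/lookahead leave the surrounding vowels in place.
    return re.sub(r"(?<=[aeiou])t(?=[aeiou])", "d", text)

print(intervocalic_t_to_d("mata tom"))  # -> "mada tom" (word-initial "t" untouched)
```

Once the context a rule needs is grammatical rather than orthographic, this approach gets complicated fast, which is where a tool like the one below comes in.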

Just to let you know, you could look into doing rule-based adaptation using FLExTrans (https://github.com/rmlockwood/FLExTrans/wiki). It relies on FLEx dictionaries and linguistic rules to transform one language into another.

Ron

by (262 points)
0 votes

This is helpful, especially the idea of using FLEx. Thank you for your replies.

by (252 points)
0 votes

If the changes are largely phonological, as it sounds they are, then another option is to use a CC table or a TECkit map to pre-process the data in Adapt It. Using CC tables or FLExTrans would require that you know what all the rules are before starting. My opinion would be that if you are going to adapt lots of different texts over a long period of time, it would be worth investing in making FLExTrans work.

by [Expert]
(2.9k points)
