It is well known that you can produce relatively good quality machine translations by doing the following:
- Carry out some pre-processing on the source text.
For example: removing text which serves no purpose in the translation (say, imperial measurements in content destined for Europe); re-ordering some lengthy sentences; marking the boundaries of embedded tags; and so on.
- Use custom domain trained machine translation engines.
This is possible with several machine translation providers. If you have a sufficient volume of good quality bilingual and monolingual corpora relevant to your subject matter, then you can train and build engines which will produce higher quality output than a generic, publicly available engine.
- Post process the raw machine translation output to correct recurrent errors.
For example, improving overall fluency, replacing specific terminology, and so on.
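As a concrete illustration of that post-edit step, such a pass can be driven by an ordered table of regular-expression rules. This is a minimal sketch in Python rather than our actual function code, and the rules themselves are invented for illustration:

```python
import re

# Ordered (pattern, replacement) post-edit rules, applied top to bottom.
# These rules are invented examples, not our production rule set.
POSTEDIT_RULES = [
    (re.compile(r"\bcolour\b"), "color"),                     # enforce target-locale spelling
    (re.compile(r"\s+([,.;:!?])"), r"\1"),                    # strip space before punctuation
    (re.compile(r"\bweb site\b", re.IGNORECASE), "website"),  # recurrent terminology fix
]

def postedit(text: str) -> str:
    """Apply each rule in order to raw machine translation output."""
    for pattern, replacement in POSTEDIT_RULES:
        text = pattern.sub(replacement, text)
    return text

print(postedit("The colour of the web site changed ."))
# → The color of the website changed.
```

Because the rules are data rather than code, they can be updated and redeployed to the post-edit function without touching the rest of the pipeline.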
We decided to implement this in a fully automated Azure Functions pipeline.
NOTE: Some MT providers have this capability built into their services but we wanted the centralized flexibility to control the pre- and post-editing rules and to be able to mix and match which MT providers we get the translations from.
The pipeline consists of three functions: preedit, translate, and postedit. The JSON payload used for inter-function communication is JLIFF, an open object-graph serialization format specification being designed by an OASIS Technical Committee.
NOTE: JLIFF is still in the design phase, but I’m impatient and it seemed like a good way to test the current snapshot of the format.
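For a flavour of what travels between the functions, here is a trimmed-down sketch of a JLIFF-style payload for a single translation unit. Since the specification is still a draft, the property names below reflect my reading of the current snapshot and are illustrative only, not the final schema:

```json
{
  "jliff": "2.0",
  "srcLang": "en-US",
  "trgLang": "fr-FR",
  "files": [
    {
      "id": "f1",
      "subfiles": [
        {
          "id": "u1",
          "kind": "unit",
          "subunits": [
            {
              "kind": "segment",
              "source": [ { "kind": "text", "text": "Hello world" } ],
              "target": [ { "kind": "text", "text": "Bonjour le monde" } ]
            }
          ]
        }
      ]
    }
  ]
}
```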
The whole thing is easily re-configured and re-deployed, and has all the advantages of an Azure consumption plan.
This pipeline also looks like a good candidate for Durable Functions, so once we have time we’ll take a look at those.
Wow, June already. Time flies in the enjoyable world of translation and technology.
I embraced the cloud six years ago, having evaluated the benefits of Platform and Software as a Service and believing in what was then a future vision: all kinds of intelligent, distributed services which would be impossible to achieve with a private, internal infrastructure. It was interesting to see that light bulb flash on for attendees not yet using the cloud at Microsoft’s Red Shirt Dublin event with Scott Guthrie last week.
Scott took us on a whistle-stop tour of Azure facilities, from Functions (a few lines of code executing logic on demand) to arrays of GPUs running Deep Learning algorithms capable of doing face recognition and sentiment analysis.
Within the development team at work, our utilization of such technologies continues: Neural Network Machine Translation; Adaptive Machine Translation; Continuous Integration; Distributed Services; and Serverless functions and logic.
At the Research end of the scale, having successfully completed our most recent European Project, I’ve been re-engaging with local research centers and interest groups. This month’s and last month’s Machine Learning Meetups were testament to how dominant Deep Learning is in driving business success and competitiveness.
And because working hard has to be balanced by playing hard I’ve ramped up sailing to three times a week.
The Cork 1720s I go out in are just wonderful boats.
We started the year with some operationally complex, significant impact projects. Progress has been slower than I would have liked but ensuring we have a solid base upon which to build is critical to the overall success. My impatience is to realize some of the potential gains now but the collateral risk is too high. So, at the midpoint we are looking at a busy next two quarters to get everything we want done but the team is well capable.
Article on the success of the latest European Commission-funded Innovation Action that we participated in.
I’ve been having all kinds of fun saving text (JSON) representations of translation units (pairs of source and target language strings), sending them from one cloud-based service to another, and then rebuilding the in-memory object representations from the text.
I know that any software engineer will be yawning about now because libraries for doing this kind of thing have existed for a long time. However, it’s been fun for me, partly because I’m doing it inside the new Azure Functions service, and partly because some of the objects have abstract relationships (interfaces and subclasses) which introduce subtleties that took a lot of research to get working.
It relates to the work of the OASIS OMOS TC, whose evolving schema for what has been dubbed JLIFF can be seen on GitHub.
The two parts of the object graph requiring special handling are the array containing Segment and Ignorable objects (which implement the ISubUnit interface in my implementation), and the array containing the text and inline markup elements of the Source and Target containers (which implement the IElement interface and subclass AbstractElement in my implementation).
To deserialize the components of these arrays, each needs a converter class which derives from Newtonsoft.Json.JsonConverter.
public class ISubUnitConverter : JsonConverter
{
    public override bool CanConvert(Type objectType) => objectType.Name.Equals("ISubUnit");

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        JObject jobject = JObject.Load(reader);
        // Resolve the concrete type from the JSON "type" discriminator.
        object resolvedType = null;
        if (jobject["type"].Value<string>().Equals("segment")) resolvedType = new Segment();
        if (jobject["type"].Value<string>().Equals("ignorable")) resolvedType = new Ignorable();
        serializer.Populate(jobject.CreateReader(), resolvedType);
        return resolvedType;
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) => serializer.Serialize(writer, value);
}
Instances of these JsonConverter subclasses are then passed into the DeserializeObject method, here alongside an analogous IElementConverter for the IElement arrays:
Fragment modelin = JsonConvert.DeserializeObject<Fragment>(output,
    new ISubUnitConverter(), new IElementConverter());
Over the Christmas break I started to reflect on the nature of service provision in the Language Services industry, in light of new technologies emerging from advances in machine learning and artificial intelligence, my own predictions of the influences upon the industry, and the industry’s likely response to them.
There have been recent announcements of adaptive and neural network machine translation; pervasive cloud platforms with ubiquitous connectivity and cognitive capabilities; an upsurge in low-cost, high-benefit open source tooling and frameworks; and many mature APIs and standards.
All of these sophisticated opportunities really do mean that as a company providing services you have to be informed, adaptable, and agile; employ clever, enthusiastic people; and derive joy and satisfaction from harnessing disruptive influences to the benefit of yourselves and your customers.
I do have concerns: How do we sustain the level of investment necessary to stay abreast of all these influences and produce novel services and solutions from them in an environment of very small margins and low tolerance to increased or additional costs?
Don’t get me wrong though. Having spent the last 10 years engaging with world-class research centers such as ADAPT, working alongside thought-leading academics and institutions such as DFKI and InfAI, participating in European-level Innovation Actions and Projects, and generally ensuring that our company has the required awareness, understanding and expertise, I continue to be positive and enthusiastic in my approach to these challenges.
I am satisfied that we are active in all of the spaces that industry analysts see as being currently significant. To wit: ongoing evaluations of adaptive translation environments and NMT, agile platforms powered by distributed services and serverless architectures, Deep Content (semantic enrichment and NLP), and Review Sentinel (machine learning and text classification).
Lest I sound complacent, we have much more in the pipeline, and my talented and knowledgeable colleagues are excited for the future.
One of the Technical Committees that I participate in is the OASIS XLIFF OMOS TC. This group is currently working on a JSON serialization of XLIFF. This fits nicely with our platform of distributed services by providing a standardized, structured format that these services can consume. I’m pleased that the committee members are aligned on the principle of least astonishment (POLA) and are working towards an API which is consistent and natural.
One of the use cases could be the simple and fast translation editor which I’ve been amusing myself with, shown below in horizontal layout.
I took a look at Box’s mojito recently. I really like it.
Mature, full-featured translation management systems are large, monolithic applications that can sometimes be difficult to navigate and learn. It is refreshing to see something simple and lightweight. Yes, mojito’s feature set is limited, but if you’re a start-up looking for a way to turn around translations online and have a limited technology budget, mojito is worth a look.
It was easy to install, comprising two jar files: one for the web application (which includes an in-memory database and embedded web server for trial purposes), and one for the client command-line interface.
I wanted to put all of my Angular2 learning into practice so I built a translation editor.
It’s just a prototype at this stage but it has basic inline tag handling and protection and change tracking. The interface is somewhat Ocelot-like.
I hope to add more functionality and use it as a test-bed for JLIFF as a backend transport format.