Tag Archives: Internationalization Tag Set

What did it all mean?

I gave two presentations at the SEMANTiCS 2016 conference in Leipzig last week. Both were related to the H2020 FREME Project, in which I have been a participant. The first was on the e-Internationalization service, to which we have contributed significantly. The second (containing contributions from Felix Sasaki) was on the use of standards (de facto and ratified) within the FREME e-services in general and our business case implementation in particular.

This was my third attendance at the conference and it once again contained interesting and inspiring presentations, ideas and use cases around linked data.

I sometimes return from these types of conferences, full of enthusiasm for applying new ideas, to the day-to-day operations of work, and become discouraged by the inertia for change and the race to the bottom on price. It is almost impossible to innovate in such an atmosphere. We have looked at applying machine learning, text classification and various natural language processing algorithms, and whilst people may acknowledge that the ideas are good, no one wants to pilot or evaluate them, let alone pay for them.

Anyhow, I remain inspired by the fields of NLP, Linked Data, Deep Learning and semantic networks, and maybe my day will come.

Standards in the Park


On the 7th and 8th of May the Multilingual Web – Language Technologies Group met at the Hotel Park in Bled, Slovenia. Bled is a stunningly beautiful town on the shore of Lake Bled, close to the Austrian border.


Specification and implementation work is progressing well on ITS 2.0 and we are giving some focus to outreach activities with the goal of getting broad adoption of the standard. If you haven’t yet heard of ITS 2.0, or have heard of the project but don’t know how it could help you, I invite you to visit these resources:

We plan to publish targeted flyers on the use, benefits and details of individual aspects of the standard in the near future but these will get you started and I will be sure to post resource locations for the flyers when they are available.

Several working group members will be presenting at FEISGILTT 2013 (11-12th June 2013) in London which is once again co-hosted with Localization World (12-14th June 2013). This will be a great opportunity to see applications of the standard demonstrated live and be able to talk with members of the working group.

I am also happy to receive email enquiries at my VistaTEC address.

Brains Trust: Karl Fritsche, Jirka Kosek, Milan Karasek, Pablo Nieto, Yves Savourel, Arle Lommel, Felix Sasaki, David Filip, Mauricio del Olmo, Tadej Štajner, David Lewis and Pedro Luis Díez Orzas


A Week Out West

I’m fortunate that my job gives me the opportunity to travel. Last week it was California. The week started well with a positive customer meeting and the arrival of a new employee in our Mountain View office.

Over the course of a number of years with one of our customers we have had a great opportunity to automate and integrate a significant number of business processes. Like us, our customer thrives on and enjoys continuously reinventing, iterating and improving tools. (I’m reluctant to use the word “innovate” as it’s becoming over-used, but the term would certainly describe what both of us do regularly.) The exciting possibility that came out of last week’s conversations with their scarily bright engineering team is for us to build a truly scalable, cloud-hosted, service-bus-based business rules engine using data regularly polled from their web services API endpoints.
In addition to existing business-related discussions, I was also able to use the trip to evangelise my more research-based interests, and to present and get early feedback on new products and standards on the horizon such as ITS 2.0, Linked Data, Review Sentinel and Reviewer’s Workbench.
The one potentially tedious aspect of business travel is the actual relocation of your body geographically. I always prepare well to combat the boredom that journeys – aided by delays – can bring. Tooled up with Kindle, iPad and paperbacks (for use during takeoff and landing), I used the time to catch up (somewhat belatedly) on Breeze, Modernizr, Require, Knockout, Font Awesome and Bootstrap, all courtesy of John Papa’s Pluralsight course.
The week also provided the chance to catch up in person with one of our outsource development partners, Spartan Software. Google Hangouts doesn’t yet replace the experience of enjoying a beer together. Spartan have been building Reviewer’s Workbench for us: our implementation of the W3C Multilingual Web Internationalization Tag Set (ITS) 2.0.

GALA Innovations in Language Technology

I presented on the Internationalization Tag Set 2.0 and gave a demonstration of Reviewer’s Workbench at yesterday’s GALA “Innovations in Language Technology” pre-Think Latin America event. It seemed to go well: I couldn’t spot anyone sleeping.

Highlights of the various presentations

Vincent Wade, CNGL – Research at CNGL

Prof. Vincent Wade, Director of CNGL, set the stage for the afternoon by talking about the challenges of volume, variety and velocity and the arrival of Intelligent Content, followed by an overview of the research activities at the Centre.

Steve Gotz, CNGL – Invention versus Innovation

Steve Gotz talked knowledgeably (as he always does) about the differences between invention and innovation. Seemingly our industry has been guilty of only doing incremental innovation rather than disruptive invention. Luckily CNGL can help with the latter.

Tony O’Dowd, Kantan – Machine Translation and Quality

Tony talked about the dichotomy of machine translation quality metrics used by system developers versus the measurements that are more of interest to those downstream from the raw MT output: Post-Editors, Project Managers, etc. He proposed an interesting way of bridging this divide.

Reinhard Schäler, Rosetta Foundation – Collaborative Translation and Non-market Localization Models

Reinhard talked about the great work that is being done by volunteer translators and how this highly collaborative model could influence the future of the industry in the medium to long term. He also covered the open-source Solas localization platform, which is the backbone of the Rosetta production environment and includes a component called “Solas Match”: a dating application for “connecting translators to content”.


Between presentations there were some stimulating and interesting discussions around the impact that disruptive technologies could have on the industry, the challenges of carrying out innovation in the industry, the future of Language Service Providers, and non-market localization.

Probably not enough of this type of conversation happens in the industry, particularly between service providers, possibly because we are all concerned about differentiating our offerings. However, as Arle Lommel pointed out to me, if a differentiating factor can be assimilated by someone else within the space of an afternoon, it probably wasn’t much of a differentiator!

A Personal Contribution to Global Intelligent Content

Global Intelligent Content

As Chief Technology Officer of VistaTEC, I was fortunate to be one of the founding Industrial Partners of the Science Foundation Ireland funded Centre for Next Generation Localisation (CNGL). CNGL has just received support for a further term with the overall research theme of “Global Intelligent Content”. I therefore thought it appropriate that my first post should actively demonstrate and support this vision.

So, what’s so “intelligent” about this post?

If you have a basic understanding of HTML you’ll know that the page you’re reading is composed of mark-up tags (elements) such as <p>, <span> and <h1>. The mark-up allows your browser to display the page such that it is easy to comprehend (headings, paragraphs, bold, italic, etc.) and to interact with (e.g. hyperlinks to other related web documents). You may also know that it can contain “keywords” or “tags”: individual words or phrases which indicate to search engines what the subject matter of the post is. This post certainly contains all of these.
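As a quick illustration, here is a minimal, hypothetical fragment showing the kinds of mark-up described above (the heading text, link and keywords are illustrative, not taken from this page’s actual source):

```html
<!-- Structural mark-up: heading, paragraph, inline styling, hyperlink -->
<h1>A Personal Contribution to Global Intelligent Content</h1>
<p>This paragraph contains <b>bold</b> and <i>italic</i> text and a
   <a href="https://www.w3.org/International/">hyperlink</a> to a related page.</p>

<!-- Search-engine keywords; in a real page this element lives in the <head> -->
<meta name="keywords" content="localization, metadata, intelligent content">
```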

The page also includes a lot of “metadata”. This metadata conforms to two standards, each of which is set to transform the way in which multilingual intelligent content is produced, published, discovered and consumed.

Resource Description Framework in Attributes

In layman’s terms RDFa is a way of embedding sense and definition into a document in such a way that non-human agents (machines and computer programs) can read and “understand” the content. RDFa is one mechanism for building the Multilingual Semantic Web.

If you right-click this page in your browser and choose “View Source”, you’ll see that it contains attributes (which allow generic HTML tags to take on more specific characteristics) such as property and typeof. These allow web robots to understand the parts of the content that I have decorated at a much more fundamental level: for example, that I created the page, the vocabulary I have used to describe the people, organisations and concepts within the document, and details about them. This data can form the basis of wider inferences regarding personal and content relationships.
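A sketch of what such RDFa decoration looks like, using the schema.org vocabulary (the names and nesting here are a made-up example, not this page’s actual mark-up):

```html
<!-- vocab declares the vocabulary; typeof says what kind of thing is
     being described; property names the attribute of that thing -->
<p vocab="https://schema.org/" typeof="Person">
  <span property="name">Felix Sasaki</span> works with the
  <span property="affiliation" typeof="Organization">
    <span property="name">W3C</span>
  </span> on multilingual web standards.
</p>
```

A crawler reading this fragment can extract machine-readable statements (a Person named “Felix Sasaki”, affiliated with an Organization named “W3C”) rather than just a string of words.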

Internationalization Tag Set 2.0

ITS 2.0 is a brand-new W3C standard being developed by the MultilingualWeb – Language Technologies (MLW-LT) Working Group, part of the W3C Internationalization Activity and funded through the European Commission. Its goal is to define categories of metadata relating to the production and publishing of multilingual web content.

To exemplify this, the overview of ITS 2.0 below was translated from German to English using the Microsoft Bing machine translation engine. Viewing the source of this page and searching for “its-” will locate the ITS Localization Quality metadata with which I annotated the translations, capturing my review of the English target.

“The goal of MultilingualWeb LT (multilingual Web – language technologies) it is to demonstrate how such metadata encoded, safely passed on and used in various processes such as Lokalisierungsworkflows can be and frameworks such as Okapi, machine translation, or CMS systems like Drupal.
Instead of a theoretical and institutional approach to standardization, LT-Web aims to develop implementations, which concretely demonstrates the value of metadata with real systems and users. The resulting conventions and results are documented and published as a W3C standard, including the necessary documentation, data and test suite, as the W3C standardization process requires it.”
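To give a flavour of what that annotation looks like, here is a hypothetical example using the ITS 2.0 Localization Quality Issue attributes in their HTML serialization (the flagged span and comment are illustrative, not copied from this page’s source):

```html
<!-- its-loc-quality-issue-* attributes mark a span a reviewer has flagged:
     type is from the ITS 2.0 issue-type list, severity is 0-100 -->
<p>... such metadata can be encoded, safely passed on and used in
  <span its-loc-quality-issue-type="untranslated"
        its-loc-quality-issue-severity="80"
        its-loc-quality-issue-comment="German compound left untranslated">
    Lokalisierungsworkflows</span> ...</p>
```

Because the annotation travels inside the content itself, any ITS 2.0-aware tool downstream (an editor, a QA dashboard, a CMS) can surface the reviewer’s verdict without a separate report.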


I’m very excited about Global Intelligent Content. This post is a very small and personal contribution to the vision but hopefully it illustrates in a simple way what it is about and some of its possibilities.