
What did it all mean?

I gave two presentations at the SEMANTiCS 2016 conference in Leipzig last week. Both were related to the H2020 FREME Project, in which I have been a participant. The first was on the e-Internationalization service, to which we have contributed significantly. The second (containing contributions from Felix Sasaki) was on the use of standards (de facto and ratified) within the FREME e-services in general and our business case implementation in particular.

This was my third attendance at the conference and it once again contained interesting and inspiring presentations, ideas and use cases around linked data.

I sometimes return from these types of conferences, full of innovation and enthusiasm for applying new ideas, to the day-to-day operations of work, and become discouraged by the inertia for change and the race to the bottom on price. It is almost impossible to innovate in such an atmosphere. We have looked at applying machine learning, text classification and various natural language processing algorithms, and whilst people may acknowledge that the ideas are good, no-one wants to pilot or evaluate them, let alone pay for them.

Anyhow, I remain inspired by the fields of NLP, Linked Data, Deep Learning and Semantic Networks, and maybe my day will come.

Disciple of Semantics and Linked Data

A little over two years ago I got to hear about, and had my curiosity piqued by, a project being undertaken at Sapienza University of Rome. I was definitely interested and excited by the project's goal. And so I started my journey of discovery and belief in the power of relationships between data items in the Internet of Things. As someone who frequents the commercial world, it can be hard to convince colleagues of the potential of ambitious ideas. But I was determined.

Two years ago this month I met Roberto Navigli in Athens and learnt about BabelNet. At that time, as I recall, he and his team were starting work on Babelfy. Listening to Roberto explain his vision had me hooked and since that time I’ve been a fan.

Then in September of 2014 I attended the MLODE Hackathon in Leipzig. During that event I got the chance to play with the BabelNet API and get a hands-on feel for what was possible using the resource. The event cemented a number of concepts for me and fuelled my imagination and enthusiasm such that soon afterwards I became a partner in the FREME Project. I would say my status at that point was that of a devotee of semantics and linked data.
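For anyone wondering what "playing with the BabelNet API" looks like in practice, here is a minimal sketch of the kind of lookup I was experimenting with. It assumes the public BabelNet HTTP API; the version in the endpoint path, the shape of the response and the key are placeholders that should be checked against the current documentation.

```python
# Minimal BabelNet lookup sketch. The endpoint version and response
# fields are assumptions based on the public HTTP API documentation;
# replace YOUR_KEY with a real API key.
import requests

BABELNET_ENDPOINT = "https://babelnet.io/v5/getSynsetIds"  # version path may differ

def synset_ids(lemma, lang="EN", key="YOUR_KEY"):
    """Return the BabelNet synset ids proposed for a lemma."""
    response = requests.get(
        BABELNET_ENDPOINT,
        params={"lemma": lemma, "searchLang": lang, "key": key},
        timeout=30,
    )
    response.raise_for_status()
    return [entry["id"] for entry in response.json()]

if __name__ == "__main__":
    print(synset_ids("Leipzig"))
```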

Today I returned from Luxembourg where I attended the BabelNet Workshop. This was one of the most interesting, stimulating and well-run (Wi-Fi problems apart) events I have ever attended. The presentations were interesting, logically arranged and clear, and had great support materials and follow-along exercises. Roberto himself is a pleasure to listen to; varied examples that illustrate his points flow like water from his mind.

And so my pilgrimage to disciple of semantics and multilingual linked data is complete. I have renewed energy and desire to utilize, and contribute to, what is, in my opinion, one of the most fascinating resources in the world for people working in the fields of linguistics, computer science and computational linguistics.

As one of my engineers puts the finishing touches to a beta Ocelot plug-in that performs semantic enrichment of content as it is being translated, I have been able to secure sufficient commercial backing to hire a computer science intern with knowledge and qualifications in linked data and semantic concepts.

Slavic Love Story at EDF

On the 16th and 17th I attended the European Data Forum in Luxembourg. I was there to talk about and demonstrate our use case for the content enrichment services of the FREME project. Our stand was well visited, and I think the project has made great progress and is well positioned coming up to the end of its first year. Though all project members were tweeting about our presence at the conference, our best publicity came from one of the consortium’s technology partners, Tilde. At one of the European Commission’s bureaucratic centers, with a suit density of 90% and much talk of data and analytics, Tatjana Gornostaja pulled a masterstroke and presented her Love Story.


Last week we submitted a proposed amendment to the Change Tracking module of XLIFF 2.0. The amendment would mean that change tracking <item /> elements would support all of the inline markup that <source /> and <target /> do. Hopefully it will be accepted and make it into XLIFF 2.1.
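To make the intent concrete, here is a purely illustrative fragment of what the amendment would allow: a change-tracking <item /> carrying the same core inline markup (here a <pc /> span) as <source /> and <target />. The element and attribute names are from memory of the Change Tracking module and should be checked against the published XLIFF 2.0 specification.

```python
# Illustrative only: a change-tracking <item> carrying inline markup.
# Element and attribute names are my recollection of the XLIFF 2.0
# Change Tracking module; check them against the specification.
import xml.etree.ElementTree as ET

FRAGMENT = """
<unit id="u1" xmlns="urn:oasis:names:tc:xliff:document:2.0"
      xmlns:ctr="urn:oasis:names:tc:xliff:changetracking:2.0">
  <ctr:changeTrack>
    <ctr:revisions appliesTo="target" ref="s1">
      <ctr:revision author="translator" datetime="2015-11-20T10:00:00Z">
        <ctr:item property="content">Previous <pc id="1">bold</pc> text</ctr:item>
      </ctr:revision>
    </ctr:revisions>
  </ctr:changeTrack>
  <segment id="s1">
    <source>Current <pc id="1">bold</pc> text</source>
    <target>Current <pc id="1">bold</pc> text</target>
  </segment>
</unit>
"""

# Parsing simply confirms the fragment is well-formed XML.
ET.fromstring(FRAGMENT)
```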

I am making slow but steady progress with learning Italian. Having a second natural language (I am fluent in four programming languages) has been a long-standing ambition, and I love the stories circulating about the associated benefits, such as a lower risk of dementia and better recovery from heart attack. This is yet another aspect of my life made possible by technology: no physical attendance at classes necessary.

Work and domestic activity levels seem set to continue at the current high rates until Christmas. This will make the holiday a well-earned one, though I say it myself. Ciao.

SEMANTiCS 2015

On Wednesday 16 and Thursday 17 I attended the SEMANTiCS 2015 conference in Vienna. I was there to present a poster for our FREME Project, to demonstrate the Ocelot-based application that we have built on top of the FREME services, and to catch up on the state of the art from the thought leaders in this space.

It was an enlightening conference with great presentations from large international companies, like Yahoo!, as well as research and public organizations.

Several presentations mentioned Schema.org as being the primary semantic vocabulary underpinning their technology. There was also a poster presented by researchers from the Semantic Technology Institute at the University of Innsbruck on the usage of Schema.org by hotels.
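For readers unfamiliar with Schema.org, this is roughly what such markup looks like when embedded in a hotel's web page as JSON-LD. The hotel and every value below are invented purely for illustration.

```python
# A hedged illustration of Schema.org markup for a hotel, expressed as
# JSON-LD. All values are fictional and only shown to give a feel for
# the vocabulary.
import json

hotel = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "Hotel Beispiel",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "Musterstrasse 1",
        "addressLocality": "Innsbruck",
        "addressCountry": "AT",
    },
    "telephone": "+43 512 000000",
    "priceRange": "$$",
}

# On a web page this would sit inside <script type="application/ld+json">.
print(json.dumps(hotel, indent=2))
```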

Whilst I didn’t get to talk to anyone who would be a natural customer of FREME, I left the conference with a strong feeling that the FREME e-Services, in helping to produce semantically rich digital content, would definitely serve the needs of the Linked Open Data Cloud and new technologies and services that will inevitably be built on top of it.

I reached this conclusion after listening to these presentations:

  • Complex Event Extraction from Real-Time News Streams, Alexandra La Fleur, Freie Universität Berlin
  • When RDF alone is not enough – triples, documents, and data in combination, Stephen Buxton, MarkLogic
  • Semantic Search at Yahoo!, Peter Mika, Yahoo!
  • Evolution of Semantic Technologies in Scientific Libraries, Prof. Dr. Klaus Tochtermann, Leibniz Information Centre for Economics

All in all, an interesting and productive trip.

 

Project Meeting in Turin

On 3 and 4 September, the Istituto Superiore Mario Boella hosted a face-to-face meeting of all the FREME project members. It was a very productive meeting where we each reported progress, demonstrated our latest prototypes, and discussed goals for the coming months and topics best brainstormed in person.

We showed how our Ocelot plug-in uses all of the current v0.3 services of the FREME platform (internationalization, translation, terminology, named entity recognition and entity linking) to aid translator productivity and to build semantic markup into our customers’ content, adding intelligence, ease of discovery and interactivity.
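For a flavour of what the plug-in does behind the scenes, here is a rough sketch of calling one of the FREME e-services (named entity recognition and entity linking) over HTTP. The endpoint path, parameter names and formats are assumptions based on my memory of the public v0.3-era API and should be checked against the current FREME documentation.

```python
# Rough sketch of a FREME e-Entity call. The endpoint path, parameter
# names and output format are assumptions and may differ from the
# deployed API.
import requests

FREME_NER = "https://api.freme-project.eu/current/e-entity/freme-ner/documents"

def enrich(text, language="en", dataset="dbpedia"):
    """Send plain text and return NIF (Turtle) with entity annotations."""
    response = requests.post(
        FREME_NER,
        params={"language": language, "dataset": dataset,
                "informat": "text", "outformat": "turtle"},
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        timeout=60,
    )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(enrich("The meeting was hosted by Istituto Superiore Mario Boella in Turin."))
```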

This was my first time in Turin and the city really grew on me as time went on. The journey there made me realise how much I take for granted that I’ll be able to fly directly into my destination city and, more often than not, get a taxi to my meeting destination. Choosing to take just one flight meant arriving late into Milan, staying overnight there and then getting a train to Turin the following morning. The train journey was a pleasure: clean, fast, plug sockets for every passenger, and a beautiful view of the Alps.

A late flight home gave me time to walk around Turin a little. After an unsuccessful attempt to see the Shroud of Turin at the Museo della Sindone (it is displayed only on rare occasions), I got a ticket for the panoramic lift (l’ascensore panoramico) at the Museo Nazionale del Cinema. What a stunning view, particularly towards the west and north.


Prove It!

I gave two important demonstrations this week to senior management:

  1. Phase one of our distributed production platform, which uses many enterprise integration architecture patterns.
  2. Using the semantic enrichment facilities of the FREME e-services from a proprietary plug-in to Ocelot that we built using its plug-in API.

The distributed platform demonstration went well and showed the potential of the architecture:

  1. Configurable routes from one micro-service to another
  2. Scalability
  3. Fault tolerance
  4. Composability and reuse.

What I particularly like about this architecture is that we can interleave discrete processes with blocks of translation management system workflow. For example, we can transform assets from one format to another, carry out validation, pre-edit, post-edit, inject, and generally modify and optimise every aspect of the production process.
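This is not our production code, just a minimal sketch of the routing idea: discrete steps (format transformation, validation, pre-editing and so on) composed into a configurable route so they can be reordered and reused across workflows.

```python
# Minimal sketch of configurable routes: each step takes an asset and
# returns it, and a route is simply an ordered composition of steps.
from typing import Callable

Step = Callable[[dict], dict]

def route(*steps: Step) -> Step:
    """Compose steps into a single pipeline."""
    def run(asset: dict) -> dict:
        for step in steps:
            asset = step(asset)
        return asset
    return run

def to_xliff(asset: dict) -> dict:
    asset["format"] = "xliff"  # placeholder for a real format transformation
    return asset

def validate(asset: dict) -> dict:
    assert asset.get("content"), "asset has no content"
    return asset

def pre_edit(asset: dict) -> dict:
    asset["content"] = asset["content"].strip()
    return asset

# A route is just configuration: add, remove or reorder steps per workflow.
standard_route = route(to_xliff, validate, pre_edit)
print(standard_route({"content": "  Hello world  ", "format": "docx"}))
```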

The Ocelot presentation went better than I had anticipated in that it captured the imagination of two of the attending senior managers: our Vice President of Global Sales commented that he thought it would open up opportunities to speak to new departments and roles within organisations, who in turn could influence localization stakeholders and buyers.

I’ll be giving both presentations again next week to a customer and the collaborators and Project Officer of the FREME consortium.

FREME

The web site for our new European Commission-funded Horizon 2020 project went live on 2015-03-27. I’m very excited about this project. It encompasses many important current topics: Big Linguistic Linked Data; the Semantic Web; NLP Technologies; Linguistic Linked Data Interoperability; and Intelligent and Enriched Content.

My goals for the project include new features for our open-source editor, Ocelot. The planned features will further integrate it with other linguistic technologies and standards, not least the Semantic Web and the Linguistic Linked Data Cloud themselves.

Having missed the project kick-off in Berlin in February, I’m looking forward to meeting all of the world-class academic and industry partners.

 

New Year, New Project

Our press release says it all:

Today VistaTEC enthusiastically announced it will be an industrial participant in a second substantial and significant European Commission‑funded Horizon 2020 project. The €3.2 million, two‑year project, entitled “Open Framework of E‑services for Multilingual and Semantic Enrichment of Digital Content” (FREME), will see VistaTEC collaborating on the design and implementation of a commercial‑grade, web‑accessible, linguistic e‑services platform. The framework will utilize Big Linguistic Open and Linked Data to deliver valuable multilingual resources upon which a range of e‑services can be built. These services cover use cases which span the digital content life‑cycle: authoring, translation, curation, publishing and discovery, in addition to bringing some leading‑edge content technologies and data models to market.

“This is a key project for my team,” said Phil Ritchie, VistaTEC’s Chief Technology Officer. “The services that will be delivered during the life of this project will provide us with unique and novel paradigms for the way in which we produce multilingual content for our customers.”

The project team includes partners such as the German Research Centre for Artificial Intelligence (DFKI), the Institute for Applied Informatics (InfAI) and Tilde, all fresh from the well‑publicized success of the Multilingual Web – Language Technologies project.

Ritchie concluded the announcement saying: “VistaTEC continues to strive to harness disruptive innovations and apply them in unique ways. I’m very excited to be part of such an experienced and knowledgeable consortium which has considerable potential to deliver technological and economic value to the language industry.”

 

2015 R&D Agenda

2015 is set to be a very industrious year for my Research and Development Team.

On the development side we have ambitious plans for a substantial amount of distributed, cloud-based automation and integration. It’s exciting, but at the same time a little frustrating, as so much of the code will have to be written from scratch as opposed to being available in existing libraries. Our first distributed cloud platform, Synthesis, has lived up to expectations and delivered cost and time savings, scalability and reliability. The new event and action rules engine will provide powerful and flexible real-time configuration. Synthesis has set a high bar for the new systems, but I am optimistic and enthusiastic to get started.

From a research perspective, in addition to a targeted project with the new ADAPT Centre, we will kick off a European Commission Horizon 2020 project code-named "FREME" (Open Framework of E-Services for Multilingual and Semantic Enrichment of Digital Content). This is an exciting opportunity to work again with the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI), the Institut für Angewandte Informatik (InfAI) and Tilde, as well as some new collaborators.

Ocelot will restate its commitment to industry standards and interoperability by supporting XLIFF 2.0. It will also form a prototype client for some of the envisioned FREME services.

Before that, however, is a well-earned break and the chance to play with some tools and books Santa got for me: JetBrains’ WebStorm, “Python 3 Text Processing with NLTK 3 Cookbook”, and “AngularJS UI Development” from Packtpub.

If you celebrate Christmas, have a wonderful one!