
Deep Content on Tour

I have just returned from presenting Deep Content on both coasts of North America, at the TAUS Annual Conference in Portland, Oregon, and at the 32nd Localization World in Montréal, Canada.

[Photo: me presenting Deep Content]

Deep Content is the combination of natural language processing (NLP) tools and Linked Data. Services such as terminology spotting, Named Entity Recognition (NER), and machine translation can consume and produce data in a common protocol, the NLP Interchange Format (NIF). A digital text document is sent to each of the services either individually or sequentially in a pipeline. Entities identified by the various services are passed to a graph query service, which searches for related information. Finally, all of this data is used to enrich the document.
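To make the flow concrete, here is a rough sketch in Python of what one pass through such a pipeline could look like. The service URL, the payload handling, and the SPARQL query are placeholders of my own for illustration; the real Deep Content services and formats may differ in the details.

```python
import requests

# Hypothetical NLP service endpoint -- illustrative only, not an actual
# Deep Content URL. DBpedia is used purely as an example Linked Data source.
NER_SERVICE = "https://example.com/nlp/ner"
SPARQL_ENDPOINT = "https://dbpedia.org/sparql"

text = "Vistatec presented Deep Content at Localization World in Montréal."

# 1. Send the document to an NLP service that speaks NIF.
#    NIF payloads are RDF, typically serialised as Turtle.
ner_response = requests.post(
    NER_SERVICE,
    data=text.encode("utf-8"),
    headers={"Content-Type": "text/plain",
             "Accept": "text/turtle"},   # ask for a NIF (Turtle) response
)
nif_document = ner_response.text  # entities annotated with offsets and IRIs

# 2. For each entity IRI the service returned, look up related facts
#    in a Linked Data graph via SPARQL.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Montreal> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
} LIMIT 1
"""
facts = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "application/sparql-results+json"},
).json()

bindings = facts["results"]["bindings"]
if bindings:
    print(bindings[0]["abstract"]["value"][:120])

# 3. The enrichment step merges the entity annotations and the retrieved
#    facts back into the document.
```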

Deep Content uses open standards: enriched content can be serialized as valid HTML 5 and made available like any other page on the web.
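As an illustration, and assuming the ITS 2.0 Text Analysis attributes as the annotation vocabulary (the actual Deep Content serialization may differ), an identified entity could be marked up in the HTML 5 output like this:

```python
# Minimal sketch: wrap an identified entity in a span that points at a
# Linked Data IRI using the ITS 2.0 Text Analysis attributes. The attribute
# choice and confidence value here are illustrative assumptions.
entity_text = "Montréal"
entity_iri = "http://dbpedia.org/resource/Montreal"

enriched_span = (
    f'<span its-ta-ident-ref="{entity_iri}" '
    f'its-ta-confidence="0.95">{entity_text}</span>'
)
print(enriched_span)
# <span its-ta-ident-ref="http://dbpedia.org/resource/Montreal" its-ta-confidence="0.95">Montréal</span>
```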

We are currently running some beta pilot projects with customers, and I'll post about the results soon. If you'd like to know more, leave a comment.

Stepping Back to Race Forward

Deep Learning and Neural Networks: two terms that have dominated the press and social media in the localization industry for the last couple of weeks. Google Research's blog post last week about their success using neural networks to power a production-scale machine translation engine sparked a lot of conversation.

I've been interested in neural networks for the last couple of years, researching what they're good at and thinking about potential use cases within Vistatec. I'm not one for wading into debates, particularly when I don't have first-hand experience to substantiate my view or add any unique insights. I will say, though, that I'm very excited about this development. It reinforces again that you cannot stay still in the technology business: new paradigms will shake the ground beneath you.

One of the aspects of NMT that intrigued me was how the encoding and decoding of variable-length sentences is achieved, given that neural networks essentially work with fixed-size vectors. It turns out word embeddings (or word vectors) play a part. [Dear readers, if you fully understand the creation and implementation of these, please leave me a comment or reference.] Now, I get semi-obsessed when I think I haven't fully understood a concept, and so ensued my journey of the last week:
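For my own benefit, here is a toy sketch of the idea: each word in the vocabulary maps to a row of a (normally learned) embedding matrix, so a sentence of any length becomes a sequence of fixed-size vectors that an encoder can consume one step at a time. The vocabulary, dimensions, and numbers below are random and purely illustrative.

```python
import numpy as np

# Toy illustration only: a real embedding matrix is learned during training
# and has tens of thousands of rows and hundreds of columns.
vocab = {"the": 0, "cat": 1, "dog": 2, "translation": 3}
embedding_dim = 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(sentence):
    """Map a variable-length sentence to a sequence of fixed-size vectors."""
    return np.stack([embeddings[vocab[w]] for w in sentence.split()])

# Each word becomes one row; an encoder (e.g. an RNN) then consumes the rows
# one at a time, so sentence length is no longer a problem.
vectors = embed("the cat")
print(vectors.shape)   # (2, 4): two words, four dimensions each

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a trained space, related words end up close together; with random
# vectors this number is meaningless and just demonstrates the measure.
print(cosine(embeddings[vocab["cat"]], embeddings[vocab["dog"]]))
```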

Binge-watching Stanford's CS224D and Coursera's Machine Learning course; revising my secondary school and university calculus; and reading everything I could find on logistic regression, backpropagation, and neural network fundamentals, including Neural Networks and Machine Learning and Best Machine Learning Resources for Getting Started.

Having filled my head with concepts and a rough mental map of how it all fits together, the next step was to play with a framework and get my hands dirty. But which one? It seems the ML framework du jour is Google's TensorFlow. So, sleeves rolled up, Diet Coke and nibbles, we're off… Linux! I have to have Linux?!

OK, I knew I'd have to assimilate Python, but what ensued was another intravenous intake of not unknown but unfamiliar tasks. Provision a basic Linux box on AWS and remote into it from Windows using PuTTY so I could install the Nvidia TensorFlow tutorial. Install Docker. Learn the basics of Docker. Install a Linux GUI and figure out how to remote into that from Windows by configuring Remote Desktop protocols. Install Python and TensorFlow and… I have to stop to attend to other commitments.
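The pay-off, once everything is installed, is the classic first program from the tutorials of the time, written against the graph-and-session API that TensorFlow used in late 2016 (newer releases default to eager execution instead):

```python
# Minimal sanity check that the TensorFlow install works, using the
# graph-and-session API of TensorFlow 0.x/1.x as it was in late 2016.
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")
a = tf.constant(2.0)
b = tf.constant(3.0)

with tf.Session() as sess:
    print(sess.run(hello))   # b'Hello, TensorFlow!'
    print(sess.run(a * b))   # 6.0
```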

So, like all great weekly television series, this project will have to be continued in another exciting installment.