Enterprise Localization Bus

On June 8 I attended the Dublin FEISGILTT [1] 2016 session entitled “Enterprise Localization Bus on the way to Global Customers”. Loïc Dufresne de Virel (Intel), Kevin O’Donnell (Microsoft) and Jan Bareš (Moravia) presented architecture diagrams showing their approaches to integrating the many distributed software applications that must communicate to form an efficient platform for delivering localized products and content.

Below is a snapshot of our current infrastructure. It combines subsystems that are commercially licensed, internally built, and subscribed to. They are distributed across on-premises servers and cloud infrastructure; some are directly connected, whilst others are loosely coupled; and they are built on both the Java Spring and .NET frameworks.

[Figure: Localization Bus architecture diagram]
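A loosely coupled design like this usually standardizes on a small message contract rather than shared libraries, so that the Java and .NET subsystems only have to agree on the envelope, not on each other's internals. As a rough sketch (the class, field names, and content-URI scheme below are invented for illustration, not our actual schema), a message on such a bus might look like:

```java
// Minimal sketch of a bus message envelope; all names here are
// illustrative, not the production schema. The payload travels by
// reference (a content URI), which keeps messages small and lets
// subsystems fetch content in whatever way suits them.
final class JobEnvelope {
    private final String jobId;
    private final String sourceLocale;
    private final String targetLocale;
    private final String contentUri;    // payload by reference, not inline
    private final int schemaVersion;    // lets consumers handle older messages

    JobEnvelope(String jobId, String sourceLocale, String targetLocale,
                String contentUri, int schemaVersion) {
        this.jobId = jobId;
        this.sourceLocale = sourceLocale;
        this.targetLocale = targetLocale;
        this.contentUri = contentUri;
        this.schemaVersion = schemaVersion;
    }

    /** Hand-rolled JSON for the sketch; a real system would use Jackson or Gson. */
    String toJson() {
        return String.format(
            "{\"jobId\":\"%s\",\"sourceLocale\":\"%s\",\"targetLocale\":\"%s\","
            + "\"contentUri\":\"%s\",\"schemaVersion\":%d}",
            jobId, sourceLocale, targetLocale, contentUri, schemaVersion);
    }

    public static void main(String[] args) {
        JobEnvelope job = new JobEnvelope("job-42", "en-US", "cs-CZ",
                "s3://loc-content/job-42.xlf", 1);
        System.out.println(job.toJson());
    }
}
```

Carrying an explicit schema version in every message is one way to keep producers and consumers independently deployable.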

The platform is continually evolving and there is still much we want to do. I am very interested in sharing knowledge on this topic – perhaps even forming a community. Along the way we have run up against challenges such as queue naming, component naming, state management, versioning, and route configuration.
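For the queue-naming challenge, a fixed convention helps keep routes predictable. Purely as an illustration (the `loc.` prefix and the segment names are hypothetical, not our real queues), a helper that encodes environment, component, event, and schema version into a queue name might look like:

```java
// Hypothetical queue-naming helper: encodes environment, component,
// event, and schema version into a single dotted name so that routing
// rules can pattern-match on segments (e.g. "loc.prod.#").
final class QueueNames {
    /** Builds names like "loc.prod.tms.job-created.v2". */
    static String queueName(String env, String component,
                            String event, int version) {
        // Reject segments that would break dotted routing patterns.
        for (String part : new String[]{env, component, event}) {
            if (!part.matches("[a-z0-9-]+")) {
                throw new IllegalArgumentException("invalid segment: " + part);
            }
        }
        return String.join(".", "loc", env, component, event, "v" + version);
    }

    public static void main(String[] args) {
        System.out.println(queueName("prod", "tms", "job-created", 2));
        // loc.prod.tms.job-created.v2
    }
}
```

Putting the version in the name (rather than only in the message) is one option; it lets old and new consumers drain separate queues during a migration.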

Localization: it’s just translating words… :-/

[1] Federated Event for Interoperability Standardization in Globalization, Internationalization, Localization, and Translation Technologies.

2 thoughts on “Enterprise Localization Bus”

  1. Kevin O'Donnell

    Thanks for sharing, Phil! Interesting to see such a hybrid of technologies and topologies – was this an intentional architecture decision or a consequence of successively acquired/developed technologies (i.e. best tool for the job)?

    Overall, your architecture looks similar to the one we and Loic shared in Dublin. There was some brief post-session discussion on how open source can play a role in enterprise localization service bus development – I think this merits further exploration. I expect some of these services can and should be commoditized among different enterprises.

    I’m also curious as to whether you use XLIFF (1.2) as your processing file format or do you have some proprietary format for your internal processing?

    1. admin Post author

      Kevin, the architecture has evolved and is still evolving, but one reason for the dual frameworks is that our internal technology stack is traditionally Microsoft, whilst many of the services we want to use only provide Java API bindings. Some of them expose REST interfaces, but where a higher-level API abstraction was available we used that.

      +1 for further discussion and the identification of opportunities to use/contribute to open source.

      The processing format is XLIFF 1.2 (and SDLXLIFF out of necessity). We have use cases for XLIFF 2.0 and 2.1, though.
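      For readers who have not seen the format, a minimal XLIFF 1.2 file looks roughly like this (the file name and strings are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="example.resx" source-language="en-US"
        target-language="cs-CZ" datatype="plaintext">
    <body>
      <trans-unit id="1">
        <source>Hello world</source>
        <target state="translated">Ahoj světe</target>
      </trans-unit>
    </body>
  </file>
</xliff>
```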
